Azure Storage for Cloud-Native Apps

Azure Storage is a highly scalable and secure cloud storage solution that's well suited to cloud-native apps. A single storage account provides one access point for all your data needs.

With Azure Storage, you can store and manage large amounts of data, including blobs, files, and queues. This makes it easy to build scalable cloud-native apps that can handle high traffic and large data sets.

Azure Storage is designed to be highly available, with an SLA of at least 99.9% uptime. This means you can rely on it to keep your data safe and accessible.

Accessing Azure Storage

Accessing Azure Storage is a straightforward process, and there are several methods to choose from. You can use Microsoft Entra integration for blob, file, queue, and table data, which is recommended for superior security and ease of use.

Azure Storage supports authentication and authorization with Microsoft Entra ID for the Blob, File, Table, and Queue services via Azure role-based access control (Azure RBAC). This method is available via the Azure portal, where you can authorize access to file data using your Microsoft Entra account.
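
As a quick sketch of Entra ID authorization with the Azure Storage Blobs client library for Python (the account URL is a placeholder, and the azure-identity package is assumed to be installed):

    from azure.identity import DefaultAzureCredential
    from azure.storage.blob import BlobServiceClient

    # DefaultAzureCredential walks the standard Entra ID credential chain
    # (environment variables, managed identity, Azure CLI login, ...)
    credential = DefaultAzureCredential()
    client = BlobServiceClient(
        account_url="https://mystorageaccount.blob.core.windows.net",
        credential=credential,
    )
    for container in client.list_containers():
        print(container.name)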

You can also use identity-based authentication over SMB for Azure Files, which supports authorization over SMB through on-premises Active Directory Domain Services (AD DS), Microsoft Entra Domain Services, or Microsoft Entra Kerberos. Additionally, you can use authorization with Shared Key, which involves passing a header with every request that is signed using the storage account access key.

Here are the available authorization methods:

  • Microsoft Entra integration for blob, file, queue, and table data
  • Identity-based authentication over SMB for Azure Files
  • Authorization with Shared Key
  • Authorization using shared access signatures (SAS)
  • Active Directory Domain Services with Azure NetApp Files

Each method has its own advantages and use cases, so it's essential to choose the one that best fits your needs.

Looking Up Account URL

To look up the account URL, you can use the Azure Portal, Azure PowerShell, or Azure CLI; any of these tools can surface the storage account's blob service URL.

The Azure Portal is a great place to start, as it provides a user-friendly interface for managing your storage accounts. From the portal, you can navigate to your storage account and find the blob service URL in the settings.

Alternatively, you can use Azure PowerShell or Azure CLI to retrieve the account URL. These tools provide a command-line interface for managing Azure resources, and can be especially useful if you need to automate tasks or integrate with other tools.

Here are the tools you can use to look up the account URL:

  1. Azure Portal
  2. Azure PowerShell
  3. Azure CLI

Each of these tools can help you find the account URL quickly and easily, so you can start accessing your Azure Storage resources.
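
If you prefer the CLI, one way to print the blob endpoint is the az storage account show command; a sketch with placeholder names:

    az storage account show \
      --name mystorageaccount \
      --resource-group myresourcegroup \
      --query "primaryEndpoints.blob" \
      --output tsv

In the default public cloud, the URL follows the pattern https://<account-name>.blob.core.windows.net.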

Accessing with Keys

Accessing Azure Storage with keys is a straightforward process. You can use Shared Key authorization, which involves passing a header with every request that is signed using the storage account access key.

There are multiple ways to store and access data in Azure Storage, including using Azure Blob Storage, which helps create data lakes for analytics needs and provides storage for cloud-native and mobile apps.

To use Shared Key authorization with rclone, fill in the account and key lines and leave the rest blank, as in the example configuration below. This method is simple but not the most flexible way to access Azure Storage.
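
A sketch of what that rclone configuration looks like (the account name and key are placeholders):

    [azureblob]
    type = azureblob
    account = mystorageaccount
    key = <base64-encoded storage account key>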

You can also use a Shared Access Signature (SAS), which is a string containing a security token that can be appended to the URI for a storage resource. This method provides more flexibility than Shared Key authorization.
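
With the Python SDK, a SAS token string can simply be passed as the credential; a minimal sketch, assuming you already hold a token:

    from azure.storage.blob import BlobServiceClient

    sas_token = "sv=..."  # placeholder; generate in the Portal or with generate_sas()
    client = BlobServiceClient(
        account_url="https://mystorageaccount.blob.core.windows.net",
        credential=sas_token,
    )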

Azure Storage supports multiple storage tiers and automated lifecycle management, making it possible to store massive amounts of infrequently or rarely accessed data in a cost-efficient way.

Here are the different authorization methods supported by Azure Storage:

  • Microsoft Entra ID (via Azure RBAC)
  • Shared Key
  • Shared access signatures (SAS)
  • Anonymous public read access (blob data only)

Note that you can leave the key field blank to use a SAS URL or the Azure Blob Emulator. The storage account access key is required for Shared Key authorization, and you can store it in the RCLONE_AZUREBLOB_KEY environment variable.

Environment Authentication

Environment authentication is a flexible way to access Azure Storage resources. Rclone can pull credentials from the environment or runtime using the env_auth config parameter.

If env_auth is set, rclone tries authentication methods in this order: Environment Variables, Managed Service Identity Credentials, and Azure CLI credentials.

Credit: youtube.com, Azure Files - Assign Azure Permissions to Users/Groups That Require Access 11/13

Rclone reads configuration from environment variables in this order: Service principal with client secret, Service principal with certificate, User with username and password, and Workload Identity.

If you're using Managed Service Identity, Rclone will use the system-assigned identity by default, but you can switch to a user-assigned identity if present.

Credentials created with the az tool can be picked up using env_auth, making it a convenient option for accessing Azure Storage resources.
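
As a sketch for the service principal case, assuming the standard Azure SDK variable names and a remote configured with env_auth = true:

    # Service principal with client secret
    export AZURE_TENANT_ID="00000000-0000-0000-0000-000000000000"
    export AZURE_CLIENT_ID="00000000-0000-0000-0000-000000000000"
    export AZURE_CLIENT_SECRET="<client secret>"

    rclone lsd remote:   # rclone reads the credentials from the environment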

Here's a breakdown of the environment variable order:

  1. Service principal with client secret
  2. Service principal with certificate
  3. User with username and password
  4. Workload Identity

Storage Account Management

To create a storage account, you can use the Azure Portal, Azure PowerShell, or Azure CLI.

The Azure Storage Blobs client library for Python provides a dedicated client object for each component of the Azure Blob Service: the storage account itself, a container within the storage account, and a blob within a container (see the sketch after the list below).

Here are the key components that make up the Azure Blob Service:

  • The storage account itself
  • A container within the storage account
  • A blob within a container
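
A minimal sketch of the three client types and how they nest (names are placeholders):

    from azure.identity import DefaultAzureCredential
    from azure.storage.blob import BlobServiceClient

    # Account-level client: the entry point for the whole storage account
    service_client = BlobServiceClient(
        account_url="https://mystorageaccount.blob.core.windows.net",
        credential=DefaultAzureCredential(),
    )
    # Container-level client, derived from the account client
    container_client = service_client.get_container_client("mycontainer")
    # Blob-level client, derived from the container client
    blob_client = container_client.get_blob_client("report.csv")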

Account

As covered above, the Azure Portal, Azure PowerShell, and Azure CLI can each create a storage account and surface its blob service URL.

The connection string to your storage account can be found in the Azure Portal under the "Access Keys" section or by running the following CLI command.
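
A sketch of that command (the account and resource group names are placeholders):

    az storage account show-connection-string \
      --name mystorageaccount \
      --resource-group myresourcegroup \
      --output tsv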

You can initialize a client instance with a storage connection string instead of providing the account URL and credential separately. This is done by passing the storage connection string to the client's from_connection_string class method.
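
For example, a minimal sketch that assumes the connection string is kept in an environment variable:

    import os
    from azure.storage.blob import BlobServiceClient

    conn_str = os.environ["AZURE_STORAGE_CONNECTION_STRING"]  # assumed variable name
    client = BlobServiceClient.from_connection_string(conn_str)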

To create a client object, you will need the storage account's blob service account URL and a credential that allows you to access the storage account. The Azure Storage Blobs client library for Python allows you to interact with three types of resources: the storage account itself, blob storage containers, and blobs.

On the rclone side, the most straightforward configuration is to fill in the account and key lines and leave the rest blank. You can also set the Azure Storage Account Name using the --azureblob-account flag or the RCLONE_AZUREBLOB_ACCOUNT environment variable.

If you leave the account name blank, it will be read from the environment variable AZURE_STORAGE_ACCOUNT_NAME if possible. Similarly, you can set the Storage Account Shared Key using the --azureblob-key flag or the RCLONE_AZUREBLOB_KEY environment variable.
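
Putting those variables together, a sketch using rclone's on-the-fly remote syntax, so no config file entry is needed:

    export RCLONE_AZUREBLOB_ACCOUNT="mystorageaccount"
    export RCLONE_AZUREBLOB_KEY="<storage account key>"

    rclone ls :azureblob:mycontainer   # :azureblob: builds the remote from the environment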

If you set the --azureblob-no-head-object flag to true, the client will not do a HEAD request before a GET request when getting objects. The default value for this flag is false.

Tenant

The tenant ID is a crucial piece of information for Azure Blob storage account management. It's also known as the directory ID.

You can use a service principal with a client secret, a service principal with a certificate, or even a user with a username and password to access your Azure Blob storage account.

If you're using a service principal, you'll need to specify its tenant ID in the configuration or as an environment variable. The tenant ID is a string value.

Here are the possible ways to specify the tenant ID:

  • Config: tenant
  • Env Var: RCLONE_AZUREBLOB_TENANT

The tenant ID is not always required, but it's essential to have it if you're using a service principal.
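
A sketch of an rclone remote for a service principal with a client secret (all IDs are placeholders):

    [azureblob]
    type = azureblob
    account = mystorageaccount
    tenant = 00000000-0000-0000-0000-000000000000
    client_id = 00000000-0000-0000-0000-000000000000
    client_secret = <client secret>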

ID

The client ID identifies the application (service principal) or user-assigned identity that you authenticate as, and it's used to authenticate and authorize access to your storage account.

The client ID is a string that can be set in the config or as an environment variable. It isn't always required, but it is necessary when you authenticate as a service principal, together with the tenant ID and a client secret or certificate. Once a credential is established, it's what lets you create a client object for interacting with your storage account, blob storage containers, and blobs.

Here's a summary of the client ID options:

  • Config: client_id
  • Env Var: RCLONE_AZUREBLOB_CLIENT_ID
  • Type: string
  • Required: false

Keep in mind that the client ID is just one part of the authentication process. You'll also need to provide a credential that allows you to access your storage account.

MSI

MSI stands for Managed Service Identity, which is a feature in Azure that allows you to authenticate to Azure Storage without needing a SAS token or account key.

To use MSI, you can specify one of the three parameters: msi_object_id, msi_client_id, or msi_mi_res_id. If you specify one of these parameters, the others can be left blank.

Here are the three parameters with their corresponding config and environment variable names:

  • msi_object_id - Config: msi_object_id, Env Var: RCLONE_AZUREBLOB_MSI_OBJECT_ID
  • msi_client_id - Config: msi_client_id, Env Var: RCLONE_AZUREBLOB_MSI_CLIENT_ID
  • msi_mi_res_id - Config: msi_mi_res_id, Env Var: RCLONE_AZUREBLOB_MSI_MI_RES_ID

If you want to use MSI, you can use the --azureblob-use-msi parameter, which is a boolean parameter that defaults to false. If you set this parameter to true, the system-assigned identity will be used by default if available, or a user-assigned identity will be used if specified.
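
A sketch of a remote that relies on the system-assigned identity of the VM or resource rclone runs on (the account name is a placeholder):

    [azureblob]
    type = azureblob
    account = mystorageaccount
    use_msi = true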

Delete Snapshots

Delete Snapshots controls how rclone deals with snapshots when a blob is deleted. You set the behavior with the delete_snapshots config option or the RCLONE_AZUREBLOB_DELETE_SNAPSHOTS environment variable.

It's a string option and isn't required. If you leave it unset, deleting a blob that has snapshots fails; set it to include to remove the blob along with its snapshots, or to only to remove just the snapshots.

Here are the configuration details for delete_snapshots:

  • Config: delete_snapshots
  • Env Var: RCLONE_AZUREBLOB_DELETE_SNAPSHOTS
  • Type: string
  • Required: false

Comprehensive Management

Comprehensive Management is a crucial aspect of Storage Account Management: it covers the full data lifecycle, from creating and storing data through managing and eventually deleting it, all within a single, unified platform, so you can track and control your data from creation to deletion.

Policy-based access control is also a key component of Comprehensive Management. This feature allows you to set specific permissions and access levels for different users and groups, ensuring that sensitive data is protected and only accessible to authorized personnel.

Immutable storage, also known as WORM (Write Once, Read Many) storage, is another important aspect of Comprehensive Management. This feature ensures that data is stored in a way that prevents it from being altered or deleted, providing an additional layer of security and data integrity.

Storage Account Settings

To set up your Azure Storage Account in rclone, specify the account name with the account configuration option (the --azureblob-account flag). If you leave this field blank, rclone will use a SAS URL or the emulator instead.

You can also set the azureblob-account option using the RCLONE_AZUREBLOB_ACCOUNT environment variable if it's available. This is a convenient way to store sensitive information securely outside of your configuration files.

Here are the ways to set the azureblob-account option:

  • Config: account
  • Env Var: RCLONE_AZUREBLOB_ACCOUNT
  • Type: string
  • Required: false

Configuration

To set up a Microsoft Azure Blob Storage remote, run rclone config, which walks you through an interactive setup process.

Once configured, you can list the contents of a container, sync your local directory to the remote container, and delete any excess files in the container, as sketched below. This will help you get started with storing your files in the cloud.
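
A sketch of those everyday commands once the remote exists (the remote name and paths are placeholders):

    rclone lsd remote:                          # list containers
    rclone ls remote:mycontainer                # list the contents of a container
    rclone sync ./local-dir remote:mycontainer  # sync, deleting excess files in the container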

To configure the retry policy, you can use the following keyword arguments when instantiating a client: retry_total, retry_connect, retry_read, retry_status, and retry_to_secondary. The default values for these arguments are 10, 3, 3, 3, and False, respectively.

Here are the default values for the retry policy arguments:

  • retry_total: 10
  • retry_connect: 3
  • retry_read: 3
  • retry_status: 3
  • retry_to_secondary: False
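
A sketch of overriding those defaults when constructing a client (the values here are arbitrary):

    from azure.identity import DefaultAzureCredential
    from azure.storage.blob import BlobServiceClient

    client = BlobServiceClient(
        account_url="https://mystorageaccount.blob.core.windows.net",
        credential=DefaultAzureCredential(),
        retry_total=5,            # total number of retries allowed
        retry_connect=2,          # retries on connection errors
        retry_read=2,             # retries on read errors
        retry_status=2,           # retries on bad status codes
        retry_to_secondary=True,  # also retry against the secondary endpoint (RA-GRS)
    )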

You can also configure the client certificate path and password using the --azureblob-client-certificate-path and --azureblob-client-certificate-password command-line arguments. Neither has a default value, and both can instead be set through the RCLONE_AZUREBLOB_CLIENT_CERTIFICATE_PATH and RCLONE_AZUREBLOB_CLIENT_CERTIFICATE_PASSWORD environment variables.

The client certificate path can be a PEM or PKCS12 certificate file including the private key.

Certificate Path

You'll need to specify the path to a PEM or PKCS12 certificate file including the private key. This can be done using the client_certificate_path variable or the RCLONE_AZUREBLOB_CLIENT_CERTIFICATE_PATH environment variable.

The path should be a string, and it's not required. You can find more information about the client_certificate_path variable in the rclone documentation.

To set the path, you can use the --azureblob-client-certificate-path flag. This flag is not required, but it can be useful if you want to specify the path as a command-line argument.

Here's a summary of the options for specifying the certificate path:

  • Config: client_certificate_path
  • Env Var: RCLONE_AZUREBLOB_CLIENT_CERTIFICATE_PATH
  • Type: string
  • Required: false

Chunk Size

Chunk size is a crucial setting in Azure Blob storage account settings. It determines the size of each chunk uploaded to the cloud.

The default chunk size is 4Mi, which can be overridden by setting the RCLONE_AZUREBLOB_CHUNK_SIZE environment variable.

There's a trade-off to consider when choosing a chunk size: larger chunks can improve upload speeds, but may require more memory to store in RAM.
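
For example, a sketch that raises the chunk size for a large upload (the remote name is a placeholder):

    rclone copy ./big-backup.tar remote:mycontainer --azureblob-chunk-size 16Mi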

Access Tier

Azure Blob Storage offers four access tiers: hot, cool, cold, and archive. This means you can choose the right tier for your specific data needs.

The access tier of blob storage is determined by the "access_tier" parameter, which can be set in the config or as an environment variable. If not specified, rclone will not apply any tier.

Here's a quick rundown of the access tier options:

  • hot: data that is accessed frequently
  • cool: data that is accessed infrequently
  • cold: data that is rarely accessed
  • archive: offline storage for data that can tolerate retrieval latency

Note that archived blobs can be restored by setting the access tier to hot, cool, or cold. If you leave the access tier blank, it will default to the account level setting.
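
A sketch of uploading straight into the cool tier (the remote and container names are placeholders):

    rclone copy ./old-logs remote:mycontainer --azureblob-access-tier cool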

Security and Encryption

Azure Storage offers robust security features to protect your data. You can authorize access to your storage accounts using Microsoft Entra integration for blob, file, queue, and table data, which is recommended for superior security and ease of use.

Azure Storage supports multiple authorization methods, including identity-based authentication over SMB for Azure Files, authorization with Shared Key, and authorization using shared access signatures (SAS).

You can also encrypt your data at rest with Azure Storage encryption, which automatically encrypts all data prior to persisting to the storage account and decrypts it prior to retrieval. All Azure NetApp Files volumes are encrypted using the FIPS 140-2 standard.

Here are some encryption options to consider:

  • Client-side encryption: Encrypts data from the client library before sending it across the wire and decrypts the response.
  • Encryption at rest: Automatically encrypts all data prior to persisting to the storage account and decrypts it prior to retrieval.
  • Encryption configuration: Use keyword arguments to configure encryption, such as require_encryption, encryption_version, and key_encryption_key.

Secure Access

Azure Storage requires authorization for every request, and supports various authorization methods, including Microsoft Entra integration, Identity-based authentication over SMB, and Shared Key authorization.

Azure Storage supports authentication with Microsoft Entra ID, which is recommended for superior security and ease of use.

You can also use Shared Key authorization, which involves passing a header with every request that is signed using the storage account access key.

Authorization with Shared Access Signatures (SAS) is another option, which involves appending a security token to the URI for a storage resource.

Azure NetApp Files features such as SMB volumes, dual-protocol volumes, and NFSv4.1 Kerberos volumes are designed to be used with Active Directory Domain Services.

To use a service principal with a certificate, you'll need to set the tenant, client ID, client certificate path, and client certificate password environment variables.

If you're using a service principal with a certificate, you can also specify whether to send the certificate chain with the authentication request.

You can read credentials from the environment or runtime using the `env_auth` parameter, which tries to authenticate with environment variables, Managed Service Identity, and Azure CLI credentials in that order.

Here are the authentication methods used by `env_auth` in order:

  1. Environment Variables
  2. Managed Service Identity Credentials
  3. Azure CLI credentials (as used by the az tool)

If you're using Managed Service Identity, you can use the default system-assigned identity or a user-assigned identity, depending on your setup.

You can also use a shared access signature (SAS) token, which can be generated from the Azure Portal or using one of the `generate_sas()` functions.

To use a storage account shared key, you can provide the key as a string, which can be found in the Azure Portal or by running an Azure CLI command.

If you're using anonymous public read access, you can simply omit the credential parameter.
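
A sketch tying those last three options together with the Python SDK (the account name, key, and permissions are placeholders):

    from datetime import datetime, timedelta, timezone
    from azure.storage.blob import (
        AccountSasPermissions,
        BlobServiceClient,
        ResourceTypes,
        generate_account_sas,
    )

    account_url = "https://mystorageaccount.blob.core.windows.net"
    account_key = "<storage account key>"

    # Option 1: shared key -- pass the account key directly as the credential
    keyed = BlobServiceClient(account_url, credential=account_key)

    # Option 2: a SAS token produced by one of the generate_sas() helpers
    sas = generate_account_sas(
        account_name="mystorageaccount",
        account_key=account_key,
        resource_types=ResourceTypes(container=True, object=True),
        permission=AccountSasPermissions(read=True, list=True),
        expiry=datetime.now(timezone.utc) + timedelta(hours=1),
    )
    sas_client = BlobServiceClient(account_url, credential=sas)

    # Option 3: anonymous public read access -- omit the credential entirely
    anon = BlobServiceClient(account_url)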

Managed Service Identity

Managed Service Identity (MSI) is a key part of Azure authentication, and understanding how it works can help you secure your data.

Rclone can use MSI to authenticate with Azure, and it has a specific order of preference for authentication methods: it first tries environment variables, then managed service identity credentials, and finally Azure CLI credentials.

If you're using MSI, you can specify the object ID of the user-assigned MSI to use with the `--azureblob-msi-object-id` flag. This flag is optional and can be left blank if you've specified the MSI client ID or resource ID instead.

Here are the specific flags and variables that Rclone uses for MSI authentication:

  • --azureblob-msi-object-id (Env Var: RCLONE_AZUREBLOB_MSI_OBJECT_ID)
  • --azureblob-msi-client-id (Env Var: RCLONE_AZUREBLOB_MSI_CLIENT_ID)
  • --azureblob-msi-mi-res-id (Env Var: RCLONE_AZUREBLOB_MSI_MI_RES_ID)

Rclone also supports using managed service identity credentials, which can be enabled by setting the `env_auth` config parameter to `true`. If you have a system-assigned identity on your VM or resource, Rclone will use it by default.

Disable Checksum

Disabling checksum can speed up the upload process for large files. This is because rclone normally calculates the MD5 checksum of the input before uploading, which can cause delays.

Rclone offers a configuration option to disable this feature. The option is called "disable_checksum" and can be set to true to disable the checksum calculation.

You can set this option in your rclone configuration or through an environment variable. The environment variable is called RCLONE_AZUREBLOB_DISABLE_CHECKSUM and can be set to true to disable the checksum.

Here are the details of the "disable_checksum" option:

  • Config: disable_checksum
  • Env Var: RCLONE_AZUREBLOB_DISABLE_CHECKSUM
  • Type: bool
  • Default: false

Disabling checksums is useful for large files, but note that it weakens end-to-end data integrity checking.
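
A sketch (the remote and container names are placeholders):

    rclone copy ./video-archive remote:mycontainer --azureblob-disable-checksum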

Encoding

Encoding determines how rclone handles characters that aren't valid in Azure blob names, which protects the integrity of your file names. The encoding for Azure Blob storage is configured through the `encoding` option.

The default encoding for Azure Blob storage is set to `Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8`, which means it will automatically handle encoding for certain characters.

To customize the encoding, you can use the `encoding` config option or set the `RCLONE_AZUREBLOB_ENCODING` environment variable.

Here's a breakdown of the encoding options:

  • Config: encoding
  • Env Var: RCLONE_AZUREBLOB_ENCODING
  • Type: Encoding
  • Default: Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8

This ensures that your file names are stored and restored faithfully.

Encryption

Encryption is a crucial aspect of security, and Azure Storage offers two basic kinds of encryption.

One option is client-side encryption, which allows you to encrypt data from the client library before sending it across the wire and decrypting the response. This ensures that data is protected even before it reaches Azure Storage.

Azure NetApp Files data traffic is inherently secure by design, but data-in-flight isn't encrypted by default. However, you can optionally enable NFSv4.1 and SMB3 data-in-flight encryption.

Azure Storage encryption protects and safeguards your data to meet your organizational security and compliance commitments. It automatically encrypts all data prior to persisting to the storage account and decrypts it prior to retrieval.

All Azure NetApp Files volumes are encrypted using the FIPS 140-2 standard.

To configure encryption, you can use keyword arguments when instantiating a client, including require_encryption, encryption_version, key_encryption_key, and key_resolver_function; a sketch follows the list below.

Here are some encryption configuration options:

  • require_encryption (bool): If set to True, will enforce that objects are encrypted and decrypt them.
  • encryption_version (str): Specifies the version of encryption to use. Current options are '2.0' or '1.0' and the default value is '1.0'. Version 1.0 is deprecated, and it is highly recommended to use version 2.0.
  • key_encryption_key (object): The user-provided key-encryption-key.
  • key_resolver_function (callable): The user-provided key resolver.
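
A minimal sketch of wiring these together, assuming a hypothetical KeyWrapper class that implements the key-encryption-key interface (wrap_key, get_key_wrap_algorithm, get_kid):

    from azure.identity import DefaultAzureCredential
    from azure.storage.blob import BlobClient

    blob = BlobClient(
        account_url="https://mystorageaccount.blob.core.windows.net",
        container_name="mycontainer",
        blob_name="secret.bin",
        credential=DefaultAzureCredential(),
        require_encryption=True,          # refuse unencrypted uploads/downloads
        encryption_version="2.0",         # version 1.0 is deprecated
        key_encryption_key=KeyWrapper(),  # hypothetical user-provided KEK object
    )
    blob.upload_blob(b"sensitive payload")  # encrypted client-side before it leaves the machine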

Frequently Asked Questions

What is the Azure storage?

Azure Storage is a cloud-based storage solution that provides highly available, scalable, and secure storage for various data types. Accessible from anywhere in the world via HTTP or HTTPS, it's a reliable choice for storing and managing your data.

Is Azure Blob the same as S3?

No, though they are close counterparts. Azure Blob Storage and Amazon S3 both provide object storage with fast data retrieval and robust security, but they differ in storage capabilities and replication options.

What is the difference between Azure storage and Azure blob storage?

Azure Storage is the umbrella platform that includes blob, file, queue, and table services, while Azure Blob Storage is the service within it that's optimized for unstructured data. (Azure File Storage, by contrast, provides managed file shares with shared access.)

What are the three types of data that can be stored in Azure?

Azure offers three primary data storage types: Blob Storage for unstructured data like images and documents, Table Storage for structured data, and Queue Storage for message-based data. These storage options cater to diverse data needs in the cloud.

What kind of data storage is most beneficial in Azure?

Blob storage is the most flexible and beneficial data storage option in Azure, allowing you to store data from multiple sources in a scalable and secure way.
