Streamline DocumentDB Data Migration to Azure with Ease

Katrina Sanford

Posted Nov 9, 2024


Credit: pexels.com, Detailed view of fiber optic patch cables connecting to a blue patch panel in a data center.

Migrating data from DocumentDB to Azure can be a daunting task, but with the right tool, it can be done with ease. The DocumentDB data migration tool is designed to simplify this process.

It's capable of handling large volumes of data, with some users reporting successful migrations of over 100,000 documents in a single operation. This makes it an ideal solution for businesses with extensive data sets.

The tool's intuitive interface and automated processes reduce the risk of human error, ensuring a smooth and efficient migration. This is especially important when working with critical business data.

By leveraging the DocumentDB data migration tool, you can save time and resources, and focus on more strategic initiatives.

DocumentDB Data Migration

DocumentDB Data Migration is a crucial process for transferring data from one system to another. The Azure Cosmos DB Data Migration tool is a suitable option for migrating data from various sources, including SQL Server, MongoDB, and JSON files.

Credit: youtube.com, SSIS: Generating JSONs with the DocumentDB Data Migration Tool

To use the Azure Cosmos DB Data Migration tool, you need the Microsoft .NET Framework 4.51 or higher installed on the machine where you will run the tool. When migrating a large amount of data into a Cosmos DB container, you should also temporarily increase the container's provisioned throughput.

Here are some common sources that can be used with the Data migration tool:

  • JSON files
  • MongoDB
  • SQL Server
  • CSV files
  • Azure Table storage
  • Amazon DynamoDB
  • HBase
  • Azure Cosmos containers

It's essential to have the Azure Cosmos DB account and database setup already in place before using the migration tool.

A Brief Overview

AWS DocumentDB is a fully managed NoSQL JSON document database that offers limited compatibility with MongoDB. It lets you store, query, index, and aggregate critical JSON data of any size cost-effectively.

AWS DocumentDB is known for its scalability, low-latency global reads, durability, and built-in security practices. This makes it a reliable choice for storing and managing large amounts of data.

You can integrate AWS DocumentDB with vector search to find similar data points using distance metrics. This feature provides millisecond response times and lets you search millions of documents in your database based on nuanced meaning and context.

Credit: youtube.com, AWS re:Invent 2019: Migrating your databases to Amazon DocumentDB (DAT372)

AWS DocumentDB also lets you make your applications smarter and more responsive through integrations with generative AI and other machine-learning capabilities. This can be a game-changer for businesses looking to improve their data management and analytics capabilities.

Here are some key features of AWS DocumentDB:

  • Scalability
  • Low-latency global reads
  • Durability
  • Built-in security practices
  • Vector search integration
  • Generative AI and machine-learning capabilities

By understanding the features and capabilities of AWS DocumentDB, you can make an informed decision about whether it's the right choice for your data migration needs.

Types

DocumentDB's data types are a crucial consideration when migrating data from MongoDB. DocumentDB does not support all of the BSON data types.

MongoDB's BSON implementation is designed to be lightweight and fast, making it highly traversable. This is essential for efficient data encoding and decoding within different languages.

The MongoDB BSON implementation supports embedding objects and arrays within other objects and arrays, similar to JSON. This feature allows for complex data structures to be easily represented.

DocumentDB's limitations on BSON data types mean you'll need to carefully evaluate your data before migrating.
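
To make the embedding point concrete, here is a minimal pymongo sketch of a document that nests objects and arrays inside one another; the connection string, collection, and field names are all illustrative, and the same structure applies whether the target is MongoDB or DocumentDB.

    # A minimal sketch: inserting a document that embeds objects and arrays,
    # as both MongoDB and DocumentDB allow. All names here are placeholders.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")  # placeholder connection
    orders = client["shop"]["orders"]

    orders.insert_one({
        "customer": {"name": "Ada", "tier": "gold"},   # embedded object
        "items": [                                     # array of embedded objects
            {"sku": "A-100", "qty": 2},
            {"sku": "B-200", "qty": 1},
        ],
        "tags": ["priority", "gift"],                  # array of scalar values
    })

When planning a migration, it's worth scanning documents like this for any BSON types that DocumentDB does not support before moving them.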

Migration Process

Credit: youtube.com, Data Migration Examples

To migrate your data into Azure Cosmos DB, you need to use the suitable migration tool that supports your Cosmos DB API type.

The Cosmos DB Data Migration tool can migrate data into the SQL API and the Table API, but it does not support migrating data into the MongoDB API or the Gremlin API.

Before using the Cosmos DB Data Migration tool, you need to make sure that the Microsoft .NET Framework 4.51 or higher is installed on the machine where you will run the migration tool.

To increase the data migration speed, you can raise the provisioned throughput (RU/s) on the Cosmos DB container.

Here are the steps to follow:

1. Download the precompiled copy of the Cosmos DB Data Migration tool from the official page.

2. Choose the source datastore, such as a JSON file, SQL Server, or MongoDB.

3. Set up the Azure Cosmos DB account and database, and obtain the connection string.

Credit: youtube.com, Meet the new Azure Cosmos DB Data Migration Tool - Ep 74

4. Create a target collection/container name on the fly in the migration tool.

5. Choose the partition key, such as /Genre, to logically and physically partition the data.

6. Set up logging to track any issues that may occur during the migration process.

7. Hit the import button to start the migration process.

The migration process may take some time, depending on the amount of data and the throughput settings.
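
If you'd rather script these steps than click through the tool, the following is a minimal sketch using the azure-cosmos Python SDK under the same assumptions: the account URL, key, database, container, and file names are placeholders, and the partition key mirrors the /Genre example from step 5.

    # A hedged sketch of steps 3-7 using the azure-cosmos Python SDK
    # (pip install azure-cosmos). Endpoint, key, and names are placeholders.
    import json
    from azure.cosmos import CosmosClient, PartitionKey

    client = CosmosClient("https://<your-account>.documents.azure.com:443/",
                          credential="<your-account-key>")
    database = client.create_database_if_not_exists("MediaDb")
    container = database.create_container_if_not_exists(
        id="Movies",
        partition_key=PartitionKey(path="/Genre"),  # partitions the data logically and physically
        offer_throughput=10000,                     # raised for the bulk load; lower it afterwards
    )

    with open("movies.json") as f:                  # source documents exported as a JSON array
        for doc in json.load(f):
            container.upsert_item(doc)              # idempotent import of each document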

Azure Integration

To integrate Azure Cosmos DB with your existing data, you'll need a migration tool that supports your Cosmos DB API type. The Cosmos DB Data Migration tool can load data into the SQL API and the Table API, but it doesn't support migrating data into the MongoDB API or the Gremlin API.

Make sure to install Microsoft .NET Framework 4.51 or higher on the machine where you'll run the migration tool. This will ensure a smooth migration process.

When migrating a large amount of data into the Cosmos DB container, increase the provisioned throughput to speed up the migration. You can then decrease the throughput after the migration completes to avoid unnecessary costs.
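
As a minimal sketch of that throughput adjustment, the azure-cosmos Python SDK can raise and lower a container's RU/s around the migration; the account details and RU/s values below are placeholders.

    # Scale provisioned throughput up for the migration, then back down.
    # Account URL, key, and names are placeholders; RU/s values are illustrative.
    from azure.cosmos import CosmosClient

    client = CosmosClient("https://<your-account>.documents.azure.com:443/",
                          credential="<your-account-key>")
    container = client.get_database_client("MediaDb").get_container_client("Movies")

    container.replace_throughput(20000)  # scale up before the bulk import
    # ... run the data migration ...
    container.replace_throughput(400)    # scale back down to avoid high costs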

Migrating to Azure

Credit: youtube.com, Migrating mainframe and midrange systems to Azure | Azure Friday

Azure Cosmos DB is a NoSQL database on the Azure platform, and it can be used as a Document DB, Graph DB, or Key-value data store.

You can use various APIs to store and query data in Cosmos DB, including SQL API, MongoDB API, Azure Table API, Cassandra API, and Gremlin API.

To migrate data into Azure Cosmos DB, you need to use the suitable migration tool that supports your Cosmos DB API type.

The Cosmos DB Data Migration tool can migrate data into the SQL API and the Table API, but it does not support migrating data into the MongoDB API or the Gremlin API.

You can download a precompiled copy of the Cosmos DB Data Migration tool directly from the Microsoft website, which includes both a command-line version and a graphical user interface version.

To ensure a successful migration, increase the throughput on the Cosmos DB container, especially when migrating a large amount of data.

The tool supports the same data sources listed earlier: JSON files, MongoDB, SQL Server, CSV files, Azure Table storage, Amazon DynamoDB, HBase, and Azure Cosmos containers.

Before starting the migration process, you need to have your Azure Cosmos DB account and database setup already in place.

Sync to Snowflake Using Hevo

Credit: youtube.com, How to Sync Snowflake Users w/ Azure Active Directory

Syncing to Snowflake with Hevo is a viable option for integrating data from various sources, including AWS DocumentDB. Hevo Data is a no-code platform that automates data pipelines, making it a cost-effective solution.

Hevo is particularly useful for transferring data from AWS DocumentDB to Snowflake, which it does through a real-time ELT pipeline that is flexible to your needs.

It is also a scalable solution that can handle large volumes of data, making it suitable for businesses with complex data integration needs.

Export and Import

To export data from AWS DocumentDB, you'll need to have Amazon DocumentDB version 4.0 or above, access to your cluster, and an AWS EC2 instance in the same VPC. You'll also need to connect the EC2 instance to your DocumentDB cluster and install the mongoexport tool.

Credit: youtube.com, How to export Azure cosmos DB to Local Json | How to import Json to Azure Cosmos DB

To export data, execute the mongoexport command with the appropriate parameters, replacing your username and password with your login credentials. This will allow you to export your data in CSV format.

Here are the prerequisites and setup steps for exporting data from AWS DocumentDB (a connection sketch follows the list):

  • Amazon DocumentDB version 4.0 or above.
  • Access to your Amazon DocumentDB cluster.
  • Create an AWS EC2 instance in the same VPC as your DocumentDB cluster.
  • Connect the Amazon EC2 instance to your Amazon DocumentDB.
  • Connect to your EC2 instance and install the mongoexport database tool.
  • Download the global-bundle.pem CA file and save it on the EC2 instance so that the MongoDB database tools can establish secure (TLS) connections to Amazon DocumentDB.
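
As a minimal sketch of that secure connection, here is how a pymongo client on the EC2 instance would connect using the downloaded CA file; the endpoint, credentials, and database names are placeholders.

    # A hedged sketch of a TLS connection to DocumentDB using the CA bundle.
    # Endpoint, credentials, and names are placeholders.
    from pymongo import MongoClient

    client = MongoClient(
        "mongodb://<username>:<password>@<cluster-endpoint>:27017/"
        "?tls=true&tlsCAFile=global-bundle.pem"   # the CA file downloaded above
        "&retryWrites=false"                      # DocumentDB does not support retryable writes
    )
    print(client["<database>"]["<collection>"].estimated_document_count())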

Forward Engineering

Forward Engineering is a feature that allows you to create a JSON document by example for each collection in the model, which can be applied to the DocumentDB instance. This feature is provided by Hackolade.

The script can be exported to the file system via the menu Tools > Forward-Engineering, or via the Command-Line Interface. A button lets the user apply the script to a selected instance, creating databases and collections with indexes, as well as sample data if desired.

This added value in forward-engineering makes it easier to create and manage your DocumentDB instance.

Here are the steps to follow:

  • Go to Tools > Forward-Engineering
  • Choose the export option, either to the file system or via the Command-Line Interface
  • Apply the script to your selected instance to create databases, collections with indexes, and sample data.

This feature is a great time-saver, allowing you to automate the process of creating and managing your DocumentDB instance.

Export in CSV Format

Credit: youtube.com, Setup | Export and Import Task Setup Data Using CSV Files

To export data from AWS DocumentDB in CSV format, you'll need to use the mongoexport tool. This tool is available for Amazon DocumentDB version 4.0 or above.

You'll also need access to your Amazon DocumentDB cluster, as well as an AWS EC2 instance in the same VPC as your DocumentDB cluster. To connect the EC2 instance to your DocumentDB cluster, follow these steps:

1. Connect to your EC2 instance.

2. Install the mongoexport database tool.

3. Download the global-bundle.pem CA file and save it to the EC2 instance.

Once you have the necessary tools and files, you can execute the mongoexport command using the appropriate parameters, replacing the username and password with your login credentials. The command should look something like this, where the endpoint, database, collection, and field names are placeholders:

    mongoexport --ssl --host=<cluster-endpoint>:27017 --username=<username> --password=<password> --sslCAFile=global-bundle.pem --db=<database> --collection=<collection> --type=csv --fields=<field1>,<field2> --out=output.csv

In summary, exporting data from AWS DocumentDB in CSV format requires DocumentDB version 4.0 or above, access to your cluster, an EC2 instance in the same VPC, the mongoexport tool, and the global-bundle.pem CA file.

Limitations and Solutions

Credit: youtube.com, DocumentDB Insider Hour | Episode 25 | Database Migration Service Best Practices

The CSV Export/Import method has some significant limitations when it comes to migrating data from AWS DocumentDB to Snowflake.

One of the main limitations is the lack of real-time analytics. Because each export is a point-in-time snapshot, this method cannot provide continuous, up-to-date insights into changing data trends, which makes it inadequate for organizations that rely on real-time analytics to make informed decisions.

Limited scalability is another major issue. As the dataset size increases, the migration through CSV export/import can become time-consuming, resulting in delayed access to important data.

Here are some key limitations to consider:

  • Lack of real-time analytics
  • Limited scalability

Configuration and Connection

To configure Amazon DocumentDB as your source, you'll need an Active Amazon Web Services (AWS) account and Amazon DocumentDB version 4.0 or above. You'll also need to connect an Amazon EC2 instance to your Amazon DocumentDB database.

To connect to your DocumentDB cluster, you can use Studio 3T. First, download and install the latest version of Studio 3T, which is currently 2020.5. Then, inside Studio 3T, click on Connect in the top left corner of the toolbar.

Credit: youtube.com, Cosmos DB Migration Tool

The step-by-step connection process is covered in the Connect to a Cluster with Studio 3T section below.

You'll also need to whitelist Hevo's IP addresses, create a security group for your DocumentDB cluster, and enable Streams on the DocumentDB cluster. Additionally, you must be assigned a Team Administrator, Team Collaborator, or Pipeline Administrator role in Hevo to configure Amazon DocumentDB as your source.

Indexes

Indexes play a crucial role in DocumentDB, allowing for efficient querying of your data.

DocumentDB creates a unique index on the _id field during collection creation, which prevents clients from inserting two documents with the same value for the _id field.

You can create user-defined indexes on a single field of a document, known as single field indexes.

Compound indexes support user-defined indexes on multiple fields, making them a powerful tool for querying complex data.

DocumentDB automatically determines whether to create a multikey index if the indexed field contains an array value.

Credit: youtube.com, How do SQL Indexes Work

You can query documents that contain arrays by matching on element or elements of the arrays using multikey indexes.

To support efficient queries of geospatial coordinate data, DocumentDB provides 2dsphere indexes that use spherical geometry to return results.

Here's a breakdown of the different types of indexes in DocumentDB:

  • Single field
  • Compound
  • Multikey
  • 2dsphere
  • Unique
  • Sparse
  • Time-to-live (TTL)

DocumentDB also supports unique indexes, which reject duplicate values for the indexed field.

Sparse indexes ensure that the index only contains entries for documents that have the indexed field.

You can combine the unique and sparse index options to reject documents that have duplicate values for a field but ignore documents that do not have the indexed key.

Time-to-live indexes, or TTL indexes, can automatically remove documents from a collection after a certain amount of time.
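
Here is a minimal pymongo sketch that creates one index of each type against a hypothetical collection; the field names and the TTL window are illustrative, not prescriptive.

    # Hedged examples of each index type; collection and field names are placeholders.
    from pymongo import MongoClient, ASCENDING

    events = MongoClient("mongodb://localhost:27017")["mydb"]["events"]

    events.create_index([("userId", ASCENDING)])                              # single field
    events.create_index([("userId", ASCENDING), ("eventType", ASCENDING)])    # compound
    events.create_index([("tags", ASCENDING)])                                # multikey, if "tags" holds arrays
    events.create_index([("location", "2dsphere")])                           # geospatial
    events.create_index([("email", ASCENDING)], unique=True, sparse=True)     # unique + sparse combined
    events.create_index([("createdAt", ASCENDING)], expireAfterSeconds=3600)  # TTL: remove after an hour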

Configure as Source

To configure Amazon DocumentDB as your source, you'll need an Active Amazon Web Services (AWS) account. This is a straightforward requirement, but it's essential to have a valid AWS account to proceed.

Credit: youtube.com, How to Configure ODBC to Access a Microsoft SQL Server

Amazon DocumentDB version 4.0 or above is also a must-have. This ensures you're using the latest and most compatible version of the service.

Next, connect your Amazon EC2 instance to an Amazon DocumentDB database. This will allow you to access your data and configure the necessary settings.

To ensure a smooth connection, whitelist Hevo's IP addresses. This is a crucial step to prevent any connectivity issues.

You'll also need to create a security group for the DocumentDB cluster in your AWS EC2 console. This will help you manage access to your cluster and ensure it's properly secured.

Creating a cluster parameter group in your Amazon DocumentDB console is another essential step. This will allow you to configure the necessary parameters for your cluster.

You'll also need to create an Amazon DocumentDB cluster. This will provide you with a dedicated database instance for your data.

To interact with your DocumentDB cluster, install the mongo shell for your operating system. This will give you the necessary tools to manage and query your data.

Creating a user with the necessary privileges in your Amazon DocumentDB database is also required. This will ensure you have the necessary permissions to access and manipulate your data.

Credit: pexels.com, An artist's illustration of artificial intelligence (AI). This image represents storage of collected data in AI. It was created by Wes Cockx as part of the Visualising AI project launched ...

Finally, enable Streams on the DocumentDB cluster and update the Change Stream Log Retention Duration. This will allow you to capture changes to your data in real-time and retain the logs for a specified period.

Here's a summary of the necessary steps to configure your Amazon DocumentDB source; a scripted sketch of the user-creation and change-stream steps follows the list:

  • An Active Amazon Web Services (AWS) account.
  • Amazon DocumentDB version 4.0 or above.
  • Connect Amazon EC2 instance to an Amazon DocumentDB database.
  • Whitelist Hevo’s IP addresses.
  • Create a security group for the DocumentDB cluster in your AWS EC2 console.
  • Create a cluster parameter group in your Amazon DocumentDB console.
  • Create an Amazon DocumentDB cluster.
  • Install mongo shell for your operating system.
  • Create a user with the necessary privileges in your Amazon DocumentDB database.
  • Enable Streams on the DocumentDB cluster.
  • Update the Change Stream Log Retention Duration.
  • You must be assigned a Team Administrator, Team Collaborator, or Pipeline Administrator role in Hevo.
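
The last few items can also be done programmatically. Below is a hedged pymongo sketch in which the endpoint, credentials, user name, role, and database names are all placeholders; the modifyChangeStreams command is the one AWS documents for enabling change streams.

    # Hedged sketch of the user-creation and change-stream steps; all names are placeholders.
    from pymongo import MongoClient

    client = MongoClient(
        "mongodb://<admin>:<password>@<cluster-endpoint>:27017/"
        "?tls=true&tlsCAFile=global-bundle.pem&retryWrites=false"
    )

    # Create a user with the privileges the pipeline needs (role shown is illustrative).
    client["admin"].command("createUser", "hevo_user",
                            pwd="<password>",
                            roles=[{"role": "read", "db": "<database>"}])

    # Enable change streams for a database (empty collection string = all collections).
    client["admin"].command({"modifyChangeStreams": 1,
                             "database": "<database>",
                             "collection": "",
                             "enable": True})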

Connect to a Cluster with Studio 3T

To connect to a DocumentDB cluster with Studio 3T, you'll need to download and install the latest version of Studio 3T, which is currently 2020.5.

First, open Studio 3T and click on Connect in the top left corner of the toolbar. This will open the Connection Manager.

Click on New Connection in the top left corner of the Connection Manager toolbar to start the connection process.

You'll need to paste your Amazon DocumentDB Cluster endpoint information on the Connection tab and give your connection a name.

On the Authentication tab, enter the authentication information for your cluster, which is required for a secure connection.

Credit: youtube.com, Set-Up MongoDB Atlas Server & Connecting To Studio 3T

You'll also need to enable SSL protocol to connect by ticking the "Use SSL protocol to connect" checkbox on the SSL tab.

To download the necessary Root CA file, use the command: wget https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem.

Additionally, you'll need to allow invalid hostnames by selecting the corresponding checkbox on the SSL tab.

To connect using SSH, you'll need to have the private key of your EC2 instance, which is the .pem file (key pair) that you saved while creating your instance in the EC2 Console.

Once you have all the details filled in, the SSH tab should contain the following:

  • Private key of your EC2 instance
  • SSH address
  • Username

Make sure your EC2 instance is in the same VPC and security group as your DocumentDB cluster to ensure a smooth connection process.
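
For comparison, here is a minimal programmatic sketch of the same SSH-tunneled connection using the sshtunnel and pymongo packages; the hosts, key file, and credentials are placeholders, and tlsAllowInvalidHostnames mirrors the Studio 3T checkbox described above.

    # Hedged sketch of an SSH tunnel to DocumentDB (pip install sshtunnel pymongo).
    # Hosts, key file, and credentials are placeholders.
    from sshtunnel import SSHTunnelForwarder
    from pymongo import MongoClient

    with SSHTunnelForwarder(
        ("<ec2-public-dns>", 22),
        ssh_username="ec2-user",
        ssh_pkey="<your-key-pair>.pem",                    # the .pem saved from the EC2 Console
        remote_bind_address=("<cluster-endpoint>", 27017),
        local_bind_address=("127.0.0.1", 27017),
    ):
        client = MongoClient(
            "mongodb://<username>:<password>@127.0.0.1:27017/"
            "?tls=true&tlsCAFile=rds-combined-ca-bundle.pem"
            "&tlsAllowInvalidHostnames=true"               # cert names the cluster, not 127.0.0.1
            "&retryWrites=false&directConnection=true"
        )
        print(client.list_database_names())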

Katrina Sanford

Writer

Katrina Sanford is a seasoned writer with a knack for crafting compelling content on a wide range of topics. Her expertise spans the realm of important issues, where she delves into thought-provoking subjects that resonate with readers. Her ability to distill complex concepts into engaging narratives has earned her a reputation as a versatile and reliable writer.
