Error handling

The Work with tools and best practices to integrate finance and operations apps with Microsoft Power Platform module describes some errors and issues that you can encounter during the implementation and synchronization of dual-write. Now, you’ll learn about error handling in different dual-write scenarios and operations.

Initial synchronization

Initial synchronization runs the first time that you connect one environment to another and the first time that a map is created between two tables or fields. This process pulls in or exports bulk data between the source system and the target system, which ensures that the information is available in both finance and operations apps and Dataverse. The process runs for one legal entity at a time, and a large amount of data can potentially move over this connection.

Long-running synchronization

One of the most common scenarios that you might encounter with this initial synchronization is that it runs for an extended period. In this case, several potential issues could exist.

Potential issue - The number of companies present in the initial sync

Because the initial sync runs for each company, and because throughput is limited to around 100 records a minute (depending on the bandwidth that you have for each system), it has the potential to run for a long time.

For this issue, we recommend that you pick a smaller subset of companies for the initial sync if the initial dataset is large and spans multiple companies. You can add four to five companies at a time to ensure that the synchronizations finish and don’t time out.

Potential issue - The number of dependent entities

Dual-write was designed to prioritize upstream dependencies during the initial sync. If you run an initial sync for customers, the sync looks for all upstream dependencies and processes those first. If you pick an entity that’s further out (a leaf entity) during the initial sync, many upstream dependencies and a large amount of related data might be pulled in to be synced first.

The recommendation in this scenario is to avoid picking the “leaf” entities. Instead, pick the root, or parent-level, entities, and then split large datasets into subsets for the initial sync.

Potential issue - Large volumes of data to import for finance and operations apps

This issue typically happens when you have an existing implementation with a large volume of data in Dataverse, and you need to import that data into finance and operations apps. Dual-write isn’t a replacement for data migration because the migration process is much more in-depth and has checks and balances to ensure data integrity over large volumes. Initial sync wasn’t intended to solve this issue. However, large volumes of data might exist outside of a data migration scenario. In that case, consider performance when you configure the entity import parameters for your implementation. For more information, see Data import and export jobs overview - Parallel imports and Optimize data migration for finance and operations apps.

Potential issue - Dataverse throttling with large volumes of data

Typically, this issue happens when historical data in finance and operations apps is transferred over dual-write.

To prevent this issue from occurring, you can use filters or computed columns to exclude old or archived data that might not be required in Dataverse.

Live sync

Live sync takes over after the initial sync is complete. The dual-write maps go into live sync mode, which continuously updates data based on the table mappings and the change tracking that’s enabled on the entities. The following issues occur frequently; recommendations for dealing with them are explained in the following sections.

Unable to sync data

Consider a scenario where you’ve enabled dual-write, but changes from finance and operations apps don’t seem to be syncing to Dataverse. Generally, you can perform several checks to optimize the integration and ensure that the expected data is synced.

Potential issue - Changes aren’t part of the transaction

Events are triggered pre-commit, which is part of the check that determines whether data should be moved from finance and operations apps.

If customizations exist, we recommend that you ensure that the data change happens inside the transaction. If the data that you expect to sync doesn’t show up, it might be outside the transaction’s scope.
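
The following sketch shows this pattern, assuming a simple customization that updates a customer’s credit limit; the account number and value are hypothetical. The key point is that the table-level change sits between ttsbegin and ttscommit, so the pre-commit event can observe it.

    // A minimal sketch, assuming a hypothetical credit limit update on CustTable.
    CustTable custTable;

    ttsbegin;
    select forupdate custTable
        where custTable.AccountNum == '1001';   // hypothetical account

    custTable.CreditMax = 50000;                // hypothetical value
    custTable.update();   // table-level update() inside the transaction
    ttscommit;            // dual-write evaluates the change at pre-commit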

Potential issue - The doUpdate/doInsert/doDelete methods, record set operations, or the skipBusinessEvents property are used

In this case, business events can’t listen to the changes. When you use these commands within code, the general intention is to skip much of the business logic.

We recommend that you keep the standard create, update, and delete methods. You’ll need to ensure that the application modifications go through the table-level methods that dual-write live sync listens to.
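
As a contrast, the following sketch shows patterns that bypass the methods that live sync listens to, followed by the recommended table-level call. The table, account number, and values are hypothetical.

    CustTable custTable;

    // Bypassing pattern 1: doUpdate() skips the update() method logic.
    ttsbegin;
    select forupdate custTable where custTable.AccountNum == '1001';
    custTable.CreditMax = 50000;
    custTable.doUpdate();   // the change isn't picked up by live sync
    ttscommit;

    // Bypassing pattern 2: set-based operations skip row-level events.
    ttsbegin;
    update_recordset custTable
        setting CreditMax = 50000
        where custTable.CustGroup == '10';
    ttscommit;

    // Recommended: go through the standard table-level method instead.
    ttsbegin;
    select forupdate custTable where custTable.AccountNum == '1001';
    custTable.CreditMax = 50000;
    custTable.update();     // raises the events that live sync listens to
    ttscommit;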

Potential issue - Untracked table from the entity

Data sources that don’t have any fields mapped, that are mapped only to a lookup, or that are joined through a read-only outer join aren’t tracked.

Make sure that you revisit the field mappings to ensure that the tables and fields are mapped. Otherwise, you can use the getEntityDataSourceToFieldMapping override method to include untracked data sources in the business logic for the entity, as the following sketch illustrates.
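
The method name comes from this unit, but the signature and body in the following sketch are assumptions for illustration only; compare them with the implementation on an out-of-box entity before using the pattern. MyProjectEntity and MyUntrackedTable are hypothetical names.

    // A minimal sketch only; the signature of getEntityDataSourceToFieldMapping
    // is an assumption, and MyProjectEntity and MyUntrackedTable are hypothetical.
    public class MyProjectEntity extends common
    {
        public static container getEntityDataSourceToFieldMapping(container _mapping)
        {
            container mapping = _mapping;

            // Append the otherwise untracked data source so that changes to it
            // raise dual-write events for this entity.
            mapping += [tableStr(MyUntrackedTable)];

            return mapping;
        }
    }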

Performance

In some cases, you might find that the integration between finance and operations apps and Dataverse through dual-write is slow and isn’t syncing quickly. In this case, several potential issues could exist.

Potential issue - The entity query plan is unoptimized

The entity query is used to identify the unique records and fields that the mappings use.

We recommend that you run performance analysis on the entity query plan to determine if it’s working correctly and optimized. You can also reduce the complexity of the entity by discarding unnecessary joins and links. Additionally, we recommend that you run an index analysis on the ranges that are used in the query to ensure that they’re optimized.
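
One way to capture the entity query for analysis is the generateonly select pattern, sketched here with CustCustomerV3Entity, the out-of-box entity that this unit mentions later; substitute the entity that you’re investigating.

    // A minimal sketch for retrieving the SQL behind an entity query so that you
    // can run it through plan-analysis tooling.
    CustCustomerV3Entity customerEntity;

    // generateonly builds the statement without executing it;
    // forceliterals surfaces range values as literals.
    select generateonly forceliterals customerEntity;

    // Print the generated statement for analysis.
    info(customerEntity.getSQLStatement());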

Potential issue - Multiple tracked tables for each entity

Entity modifications are tracked for the tables that are used in mappings. If all tracked tables are modified as part of one transaction, multiple events can be raised for the same entity record. Combined with an unoptimized query plan, this behavior can impact performance.

For this scenario, we recommend that you take steps that are similar to those for the first potential issue, which is to analyze the query plan. You can also use the dualWriteShouldSkipDataSource override method in the business logic for the entity or its extension to skip tracked tables that aren’t needed, as sketched below. This method, among many others, is implemented on CustCustomerV3Entity, which is available out of the box in finance and operations apps.
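
The following sketch assumes a custom entity; the method name and its presence on CustCustomerV3Entity come from this unit, but the signature is an assumption, so verify it against that out-of-box implementation. MyProjectEntity and MyAuditTable are hypothetical names.

    // A minimal sketch only; the signature is an assumption, and MyProjectEntity
    // and MyAuditTable are hypothetical.
    public class MyProjectEntity extends common
    {
        public boolean dualWriteShouldSkipDataSource(TableId _dataSourceTableId)
        {
            // Skip a joined table that never needs to trigger a sync for this entity.
            if (_dataSourceTableId == tableNum(MyAuditTable))
            {
                return true;
            }

            return false;   // keep tracking all other data sources
        }
    }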

Potential issue - Similar tables are tracked for multiple entities

Common tables can be tracked for multiple entities, so a change to such a table can raise events for all linked entities. Especially if those entities run on unoptimized query plans, this behavior can impact the performance of the dual-write integration.

To resolve this issue, we recommend that you run a check on the BusinessEventsDefinition table, grouping by the RefTableName and RefEntityName fields, to see which tables raise events for multiple entities.
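
A sketch of that check in X++ follows; the table and field names are taken from this unit, and the grouped counts are written to the Infolog.

    // Group BusinessEventsDefinition records by RefTableName and RefEntityName
    // and count the definitions in each group. Tables that appear alongside many
    // entity names are candidates for review.
    BusinessEventsDefinition definition;

    while select count(RecId) from definition
        group by definition.RefTableName, definition.RefEntityName
    {
        // With a count aggregate, each group's total is returned in RecId.
        info(strFmt("Table %1 -> entity %2: %3 definition(s)",
            definition.RefTableName,
            definition.RefEntityName,
            definition.RecId));
    }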

Data failures and validations

Besides performance, another common category of issues relates to lookup failures.

Potential issue - Dependent records aren’t available in Dataverse

Lookups include reference data that’s required as part of resolving a transaction. Even if the data is marked as nullable in Dataverse, the lookups are required if the tables and fields are mapped in dual-write.

We recommend that you analyze whether the reference data was created before the transaction or as part of it. For detailed payload information, you can use the debug mode in live sync for further investigation.

Potential issue - Dependent records aren’t created as part of the transaction

When upstream dependencies are created as part of the same transaction, they haven’t been committed yet at the time that the lookup resolves, so the lookup fails.

Lookup resolutions depend on the entity relationships in finance and operations apps. To resolve this issue, you can add relationships on all related entities that are part of the transaction.