Managing Asynchronous CPMs
Answer ID 12316   |   Last Review Date 08/04/2022

Why are CPMs not running on my site?


Process Designer, Custom Process Model (CPM) / Service Process Model (SPM)
All product versions


Asynchronous CPMs run in a queue, with records processed in a first-in/first-out manner. As a result, what appears to be a problem with asynchronous CPMs is often instead a backlog in the queue; in these cases the product is working as expected. The following is a discussion of how best to manage implementations that use this product feature.
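The first-in/first-out behavior described above can be illustrated with a short sketch. This is a conceptual simulation only, not product code; the item names and queue contents are hypothetical.

```python
from collections import deque

# Hypothetical FIFO queue, for illustration only -- the item names below
# are invented and do not correspond to actual product identifiers.
queue = deque()

# A backlog (e.g., a large batch of DLP destroy records) is enqueued first...
for i in range(3):
    queue.append(f"dlp-destroy-{i}")

# ...so an asynchronous CPM record enqueued afterward must wait behind it.
queue.append("async-cpm-incident-update")

processed = []
while queue:
    processed.append(queue.popleft())  # the oldest item is always taken first

# The CPM item is processed last, after the entire backlog clears.
print(processed)
```

The point of the sketch: the CPM itself is not failing; it simply cannot run until everything queued ahead of it has been processed.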

What are the main considerations when using asynchronous CPMs?

The queue used to process asynchronous CPMs is also used to process Event Subscription functionality and many aspects of the base product (i.e., functionality that is neither customized nor implemented with the Oracle B2C Service APIs). This includes the archive and destroy of records configured through Data Lifecycle Policy functionality.
The queue is designed to process items in a way that uses a limited set of site resources, including database connections, and to process as quickly as possible under the resulting limitations. This design allows for the records to be processed while keeping ample site resources available for other traffic/activity.

By design, asynchronous CPMs should not be used for functionality that needs to happen immediately. The best alternative to any customization is to use base-product functionality when available. Additional functionality is added to the product in each quarterly release, so existing customizations should be reviewed periodically to determine whether an appropriate replacement exists within the base product. For information on alternatives involving site customizations see

Answer ID 11934:  Alternatives to using CPM customizations for event-driven functionality that needs to happen immediately

How does one determine the current state of the queue?

For functionality that has been implemented using asynchronous CPMs, the way to see if and when the CPMs run is through custom logging and analytics reports that filter on fields modified by the CPMs. Dedicated database fields can be added for this purpose alone, and associated analytics reports can be set to an automated schedule to warn site administrators of delays in the queue. Determining the source of any queue backlog involves evaluation of custom logging, site error/info logs, and database transactions (transactions and co_trans tables).
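The monitoring approach above can be sketched as follows. This assumes the asynchronous CPM writes a timestamp into a custom field when it processes a record, so the gap between creation and processing approximates queue delay; the field layout, sample data, and alert threshold are all illustrative, not product-defined.

```python
from datetime import datetime, timedelta

# Hypothetical alert threshold -- choose one appropriate for the site.
ALERT_THRESHOLD = timedelta(minutes=30)

# Sample report rows: (record created, custom field stamped by the CPM).
records = [
    (datetime(2022, 8, 4, 9, 0), datetime(2022, 8, 4, 9, 2)),
    (datetime(2022, 8, 4, 9, 5), datetime(2022, 8, 4, 10, 15)),
]

# Queue delay per record is simply the gap between the two timestamps.
delays = [stamped - created for created, stamped in records]
worst = max(delays)

if worst > ALERT_THRESHOLD:
    print(f"Possible queue backlog: worst observed delay is {worst}")
```

A scheduled analytics report filtering on the same custom field can serve the same purpose without any external script.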

How can delays/backlogs in the queue be avoided when using asynchronous CPMs?

The primary product-based functionality that can cause delays in processing is Data Lifecycle Policy (DLP) functionality, particularly the destroy of archived incidents. Care should be taken so that the DLP configuration on a site processes a consistent number of incidents per day, ideally roughly a day's worth of incoming incidents. The records are triggered at midnight (per the default utility schedule), and the system is designed to process a set amount per hour until the records for the day are cleared. This allows these resource-intensive processes to run during off-hours for the site and pod, and avoids (as much as possible) delaying other records added to the queue during normal business hours.

When sites do use asynchronous CPMs for time-sensitive functionality, the DLP functionality can become a problem. The long-term solution is to modify the system so that asynchronous CPMs are not used for such functionality.
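The sizing guidance above amounts to simple arithmetic: the configured daily destroy volume should roughly match the site's daily incident volume, and the overnight batch should clear before business hours. The numbers and assumed hourly processing rate below are illustrative only.

```python
# Hypothetical sizing check -- all figures are examples, not product defaults.
incidents_created_per_day = 4_800
dlp_destroy_per_day = 6_000          # configured daily destroy volume
records_processed_per_hour = 1_000   # assumed hourly processing rate

# Starting at midnight, how long does the overnight batch occupy the queue?
hours_to_clear = dlp_destroy_per_day / records_processed_per_hour

# If the site destroys fewer incidents than it creates, the backlog grows.
backlog_grows = dlp_destroy_per_day < incidents_created_per_day

print(f"Overnight DLP batch takes ~{hours_to_clear:.0f} hours to clear")
print("Backlog will grow over time" if backlog_grows else "Daily volume keeps pace")
```

If the batch runs well into business hours, asynchronous CPM records queued during the day will sit behind it, producing the apparent CPM delays described earlier.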

The Oracle B2C Service Technical Support team can use an internal configuration to speed up the processing of these records (the destroy of archived incidents) considerably. However, customers should be aware that this configuration modifies, and effectively disables, functionality that exists to support recovery from an unexpected system failure. With that functionality disabled, the site will process the incident archive/destroy records much faster, but at the risk that comes with not using the system as designed.

What aspects of asynchronous CPM customizations relate to delays/backlogs in the queue?

It is critically important that the best practices defined for CPM customizations be followed for all CPMs, whether configured as synchronous or asynchronous. (Note that a synchronous CPM will run within the process running an asynchronous CPM when the synchronous CPM is triggered by an API save inside that asynchronous CPM.) In particular, proper exception handling, appropriate PHP curl call timeouts, and use of API suppression can significantly affect the time it takes an asynchronous CPM to run to completion. For further details on this see

Answer ID 8392:  CPM/Process Designer Best Practices and Guidelines

Answer ID 7890:  Enabling API suppression in Connect for PHP Customizations
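The timeout and exception-handling practices referenced above follow a general pattern: bound every external call with an explicit timeout, and catch failures instead of letting them propagate, so one slow or failing call cannot stall everything queued behind the CPM. The sketch below shows that pattern in Python for illustration; in an actual CPM this would be PHP curl with an explicit timeout option, and the function name, URL, and timeout value here are hypothetical.

```python
import urllib.request
import urllib.error

def notify_external_system(url: str, timeout_seconds: float = 5.0) -> bool:
    """Call an external endpoint without letting a hang stall the queue.

    Hypothetical helper for illustration; not part of any product API.
    """
    try:
        # An explicit timeout bounds the worst-case runtime of this step.
        with urllib.request.urlopen(url, timeout=timeout_seconds) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Catch and report failure rather than raise: an unhandled exception
        # or an unbounded wait here would delay every item queued behind
        # this CPM.
        return False
```

The same reasoning applies to API suppression: saves performed inside a CPM that needlessly re-trigger other CPMs add further work to the queue.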

What else can the Oracle B2C Service Technical Support team do to help if there is a backlog in the queue?

Upon request, the Oracle B2C Service Technical Support team can review internal logs and determine whether the queue is processing. Relatively short delays, such as under 10 minutes, indicate that the queue is processing normally, so inquiries into delays of that scale are discouraged. Cases where the queue is found not to be processing normally will be investigated as a potential product issue.