---
title: "Workaround sites failing during Acquia Cloud Site Factory Staging Process post-db-copy"
date: "2022-02-25T01:00:26+00:00"
summary:
image:
type: "article"
url: "/site-factory/help/94551-workaround-sites-failing-during-acquia-cloud-site-factory-staging-process-post-db-copy"
id: "089599b7-573b-466a-9ff2-381715f2c645"
---

Problem
-------

During the staging process for a Site Factory site, you may see an error like the following in your WIP logs:

    The database copy task to the target stage has encountered an error. Acquia Cloud task id: XXXXXXXX.

To examine the underlying error, you can export the [WIP log (Work In Progress or Task log)](/node/57334) to gather more information. In it, you might see an error like the following:

    Executing: CACHE_PREFIX='/mnt/tmp/example.01testup/drush_tmp_cache/e515653c43f29ab184ddec067e5a3eab' AH_SITE_ENVIRONMENT='01testup' \drush8 -r '/var/www/html/example.01testup/docroot' -l 'https://example.com' -y acsf-site-scrub;
    
    In DefaultFactory.php line 97:
                                                                                   
      Plugin (config_ignore) instance class "Drupal\config_ignore\Plugin\ConfigFi  
      lter\IgnoreFilter" does not exist.                                           
                                                                                   
    
    Command execution returned status code: 1!

Workaround to Resolve
---------------------

As a workaround, your team can implement a `post-db-copy` hook that runs either a full cache clear/rebuild or a database update (`updb`) before the scrub script executes, preventing the scrub from failing. For example, take the sample script below, give it a name that sorts before `000-acquia_required_scrub.php`, and place it in `/mnt/www/html/example.01live/hooks/common/post-db-copy/`, making sure the file is executable:

    #!/bin/sh
    #
    # Cloud Hook: post-db-copy
    #
    # The post-db-copy hook is run whenever you use the Workflow page to copy a
    # database from one environment to another. (Note this means it is run when
    # staging a site but not when duplicating a site, because the latter happens on
    # the same environment.) See ../README.md for details.
    #
    # Usage: post-db-copy site target-env db-role source-env
    
    site="$1"
    target_env="$2"
    db_role="$3"
    source_env="$4"
    
    # You need the URI of the site factory website in order for drush to target that
    # site. Without it, the drush command will fail. The uri.php file below will
    # locate the URI based on the site, environment and db role arguments.
    uri=$(/usr/bin/env php "/mnt/www/html/$site.$target_env/hooks/acquia/uri.php" "$site" "$target_env" "$db_role")
    
    # Print a statement to the cloud log.
    echo "$site.$target_env: Received copy of database from $uri ($source_env environment)."
    
    # The website's document root can be derived from the site/env:
    docroot="/var/www/html/$site.$target_env/docroot"
    
    # Acquia recommends the following two practices:
    # 1. Hardcode the drush version.
    # 2. When running drush, provide the docroot + url, rather than relying on
    #    aliases. This can prevent some hard to trace problems.
    DRUSH_CMD="drush8 --root=$docroot --uri=https://$uri"
    
    # Run a cache rebuild. Alternatively, run database updates instead:
    #   $DRUSH_CMD updb -y
    $DRUSH_CMD cr
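
Because hooks in a directory run in lexicographic (alphabetical) order, the file name must sort before `000-acquia_required_scrub.php` for the workaround to take effect. The sketch below is only an illustration (it uses a temporary directory as a stand-in for the real `post-db-copy` hooks directory) showing that a `00-` prefix sorts before `000-` and that the new hook must be marked executable:

```shell
#!/bin/sh
# Illustration only: a temporary directory stands in for
# /mnt/www/html/example.01live/hooks/common/post-db-copy
hooks_dir=$(mktemp -d)

# Stand-ins for the required scrub and the new pre-scrub hook:
touch "$hooks_dir/000-acquia_required_scrub.php"
touch "$hooks_dir/00-acquia_pre_scrub.php"

# The hook must be executable to run:
chmod +x "$hooks_dir/00-acquia_pre_scrub.php"

# In lexicographic order, '-' sorts before '0', so the
# pre-scrub hook lists (and therefore runs) first:
ls "$hooks_dir"
```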

Please note that the above script is untested; your team should test your hook/script thoroughly before deploying it to production. Also note that to make the file run before the required scrub, you could name it something like `00-acquia_pre_scrub.php`, which sorts before `000-acquia_required_scrub.php`. We plan to add these features to the required scrub script in the future; you can track this in our release notes: [https://docs.acquia.com/site-factory/release-notes/](/node/56120)

For additional information on the scrub script, see: [https://docs.acquia.com/site-factory/workflow/scrub/](/node/57348)