On Monday, 11 April 2022, Acquia will update the policy that allows customers to create and store backup sites for their Site Factory application. Once implemented, the new policy will allow Acquia to remove any backup sites that are more than one year (365 days) old. Customers who have backup sites older than a year that they wish to keep can save them prior to 11 April 2022.
Q: How do I save backup sites that are older than one year?
To keep any backup sites that are more than a year old, you must download them and save them to your own storage. Acquia will no longer store any backup sites older than one year. Use the following example script to download your existing backup sites:
#!/usr/bin/env php
<?php
// Disclaimer: This is an example script provided as-is; it may not meet all
// (or any) of the specific needs of its user. You are free to change or extend
// any of the functionality described in this script to fit your own needs.
use Aws\S3\S3Client;
use GuzzleHttp\Client;
use GuzzleHttp\Exception\RequestException;
// Example script for copying backups of several sites through the REST API to
// an external S3 bucket.
// Two things are left up to the script user:
// - Including Guzzle and the AWS SDK, which are used by request() and upload_to_s3()
// e.g. by doing:
// composer init
// composer require guzzlehttp/guzzle
// composer require aws/aws-sdk-php
require 'vendor/autoload.php';
// - Populating $config:
$config = [
// URL of a subsection inside the SF REST API; must end with sites/.
'url' => 'https://www.myfactory.acsitefactory.com/api/v1/sites/',
'api_user' => '',
'api_key' => '',
// Site IDs of the sites to process; can also be provided as CLI argument.
'sites' => [],
// Request parameter for /api/v1#List-sites.
'limit' => 100,
// Sets if the backups should be uploaded to S3. If set to FALSE, backups will
// be kept locally.
'upload_to_s3' => TRUE,
// The AWS configuration describing where the backups should be uploaded.
's3_config' => [
'bucket' => '',
'region' => '',
'key' => '',
'secret' => '',
],
];
if ($argc < 2 || $argc > 3 || !in_array($argv[1], array('fetch-and-upload'), TRUE)) {
$help = <<<EOT
Usage: php backup_migration.php fetch-and-upload [site_ids].
Where:
- [site_ids] is either a comma-separated list (e.g. 111,222,333) or 'all'
EOT;
echo $help;
exit(1);
}
// Cap the 'limit' parameter at the maximum which the API allows.
if ($config['limit'] > 100) {
echo "\nLimit set to 100!";
$config['limit'] = 100;
}
// Check if the list of sites in $config is to be overridden by the provided
// input. If the input is set to 'all' then fetch the list of sites using the
// Site Factory API, otherwise it should be a comma separated list of site IDs.
if ($argc >= 3) {
if ($argv[2] == 'all') {
$config['sites'] = get_all_sites($config);
}
else {
// Removing spaces.
$no_spaces = str_replace(' ', '', $argv[2]);
// Keeping only IDs that are valid.
$config['sites'] = array_filter(explode(',', $no_spaces), "id_check");
// Removing duplicates.
$config['sites'] = array_unique($config['sites']);
}
}
// Helper; returns true if given ID is valid (numeric and > 0), false otherwise.
function id_check($id) {
return is_numeric($id) && $id > 0;
}
// Fetches the list of all sites using the Site Factory REST API.
function get_all_sites($config) {
// Starting from page 1.
$page = 1;
$sites = array();
printf("Getting all sites - Limit / request: %d\n", $config['limit']);
// Iterate through the paginated list until we get all sites, or
// an error occurs.
do {
printf("Getting sites page: %d\n", $page);
$method = 'GET';
$url = $config['url'] . "?limit=" . $config['limit'] . "&page=" . $page;
$has_another_page = FALSE;
$res = request($url, $method, $config);
// request() returns NULL when an exception was thrown; treat that as an error.
if (!$res || $res->getStatusCode() != 200) {
echo "Error whilst fetching site list!\n";
exit(1);
}
$next_page_header = $res->getHeader('link');
$response = json_decode($res->getBody()->getContents());
foreach ($response->sites as $site) {
$sites[] = $site->id;
}
// If the next page header is present and has a "next" link, we know we
// have another page.
if (!empty($next_page_header) && strpos($next_page_header[0], 'rel="next"') !== FALSE) {
$has_another_page = TRUE;
$page++;
// Sleeping for 1 second to prevent DoSing the factory API.
sleep(1);
}
} while ($has_another_page);
return $sites;
}
// Helper function to return API user and key.
function get_request_auth($config) {
return [
'auth' => [$config['api_user'], $config['api_key']],
];
}
// Sends a request using the guzzle HTTP library; prints out any errors.
function request($url, $method, $config, $form_params = []) {
// We are setting http_errors => FALSE so that we can handle them ourselves.
// Otherwise, we cannot differentiate between different HTTP status codes
// since all 40X codes will just throw a ClientError exception.
$client = new Client(['http_errors' => FALSE]);
$parameters = get_request_auth($config);
if ($form_params) {
$parameters['form_params'] = $form_params;
}
try {
$res = $client->request($method, $url, $parameters);
return $res;
}
catch (RequestException $e) {
printf("Request exception!\nError message %s\n", $e->getMessage());
}
return NULL;
}
// Downloads the backup locally.
function download_backup($url, $filename) {
printf("Downloading backup %s. This might take a while...\n", $filename);
// The 'sink' option streams the response body straight into the local file,
// so no separate file handle or low-level curl options are needed.
$client = new Client(['verify' => FALSE]);
$client->get($url, ['sink' => $filename]);
}
// Uploads the backup to the given S3 bucket.
function upload_to_s3($filename, $config) {
printf("Uploading backup %s to S3. This might take a while..\n", $filename);
$s3 = new S3Client([
'version' => 'latest',
'region' => $config['s3_config']['region'],
'credentials' => [
'key' => $config['s3_config']['key'],
'secret' => $config['s3_config']['secret'],
],
]);
$s3->putObject([
'Bucket' => $config['s3_config']['bucket'],
'Key' => $filename,
'Body' => fopen($filename, 'r'),
]);
}
// Fetches the temporary URL to download the backup from the ACSF S3 bucket.
function fetch_temporary_backup_url($backup_id, $site_nid, $config) {
$url = $config['url'] . $site_nid . '/backups/' . $backup_id . '/url';
$res = request($url, 'GET', $config);
if (!$res) {
// An exception was thrown.
printf("Error whilst fetching backup id %d for site id %d.\n", $backup_id, $site_nid);
printf("Please check the above messages for the full error.\n");
return NULL;
}
elseif ($res->getStatusCode() != 200) {
printf("Error whilst fetching backup id %d for site id %d.\n", $backup_id, $site_nid);
printf("HTTP code %d\n", $res->getStatusCode());
$body = json_decode($res->getBody()->getContents());
printf("Error message: %s\n", $body ? $body->message : '<empty>');
return NULL;
}
$body = json_decode($res->getBody()->getContents());
return $body->url;
}
// Iterates through each backup, downloads and stores it in the desired place.
function get_and_upload_backups($backups, $site_nid, $config) {
foreach ($backups as $backup) {
$url = fetch_temporary_backup_url($backup->id, $site_nid, $config);
// Skip this backup if no temporary URL could be fetched or parsed.
if (!$url || !preg_match('/[a-z0-9_\.]+\.gz/', $url, $matches)) {
continue;
}
$backup_name = $matches[0];
download_backup($url, $backup_name);
if ($config['upload_to_s3']) {
upload_to_s3($backup_name, $config);
printf("Upload successful!\n");
unlink($backup_name);
}
}
}
// Fetches backups from the factory and then uploads them to S3.
function backup_fetch($operation, $config) {
$endpoint = '/backups';
$message = "Fetching backups for %d.\n";
$method = 'GET';
for ($i = 0; $i < count($config['sites']); $i++) {
// Sending API request.
$url = $config['url'] . $config['sites'][$i] . $endpoint;
$res = request($url, $method, $config);
$message_site = sprintf($message, $config['sites'][$i]);
// If the request returned an error, show it and
// continue with the next site.
if (!$res) {
// An exception was thrown.
printf('Error whilst %s', $message_site);
printf("Please check the above messages for the full error.\n");
continue;
}
elseif ($res->getStatusCode() != 200) {
printf('Error whilst %s', $message_site);
printf("HTTP code %d\n", $res->getStatusCode());
$body = json_decode($res->getBody()->getContents());
printf("Error message: %s\n", $body ? $body->message : '<empty>');
continue;
}
// All good here.
echo $message_site;
$body = json_decode($res->getBody()->getContents());
get_and_upload_backups($body->backups, $config['sites'][$i], $config);
}
}
backup_fetch($argv[1], $config);
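Once $config is populated and the Composer dependencies are installed, the script can be run as, for example, php backup_migration.php fetch-and-upload all to process every site on the Factory, or php backup_migration.php fetch-and-upload 111,222,333 to process a specific comma-separated list of site IDs.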
Q: What will happen to my backup sites that are not yet a year old?
Backup sites stored in your Acquia Site Factory cloud storage that are less than one year old will remain there until they reach one year in age. After that, they will expire and be removed from Acquia storage. You may save them before that time by using the instructions above.
Q: Can I still create backup sites after 11 April 2022?
Customers can still create backup sites from their Acquia Site Factory application. Those backups will function as normal and remain in your Acquia Cloud storage until you download them or until they expire after one year (365 days).
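For reference, below is a minimal sketch of requesting a new backup for a single site through the Site Factory REST API, reusing the same Guzzle and credential setup as the script above. The POST sites/{site_id}/backup endpoint, its 'label' parameter, and the example site ID are assumptions; verify them against your Factory's /api/v1 documentation before relying on this.
#!/usr/bin/env php
<?php
use GuzzleHttp\Client;
// Example sketch: request a new backup for a single site through the Site
// Factory REST API. The POST sites/{site_id}/backup endpoint and its 'label'
// parameter are assumptions; confirm them in your Factory's API documentation.
require 'vendor/autoload.php';

$config = [
  // Must end with sites/, as in the migration script above.
  'url' => 'https://www.myfactory.acsitefactory.com/api/v1/sites/',
  'api_user' => '',
  'api_key' => '',
];

// Hypothetical site node ID to back up.
$site_nid = 111;

$client = new Client(['http_errors' => FALSE]);
$res = $client->request('POST', $config['url'] . $site_nid . '/backup', [
  'auth' => [$config['api_user'], $config['api_key']],
  // A human-readable name for the backup (assumed to be optional).
  'form_params' => ['label' => 'pre-expiry-backup'],
]);

printf("HTTP %d: %s\n", $res->getStatusCode(), $res->getBody()->getContents());
If the request succeeds, the new backup should then appear in the GET sites/{site_id}/backups listing that the migration script above iterates over.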
Q: I have some additional questions and concerns. Who can I contact?
If you have additional questions, you can contact Acquia Support by logging in to accounts.acquia.com and visiting the Acquia Help Center. We will be happy to assist you!