Examples for IP address restriction
Sample code for settings.php includes and $conf settings that help you quickly
lock down an Acquia Cloud environment using basic auth and/or IP whitelisting.
- All site lockdown logic is located in acquia.inc.
- All settings are in $conf variables.
- ``$conf['ah_basic_auth_credentials']`` An array of basic auth username/password combinations.
- ``$conf['ah_whitelist']`` An array of IP addresses allowed onto the site.
- ``$conf['ah_blacklist']`` An array of IP addresses that will be denied access to the site.
- ``$conf['ah_paths_no_cache']`` Paths we should explicitly never cache.
- ``$conf['ah_paths_skip_auth']`` Skip basic authentication for these paths.
- ``$conf['ah_restricted_paths']`` Paths which may not be accessed unless the user is on the IP whitelist.
- The site lockdown process happens by calling ``ac_protect_this_site();`` with defined $conf elements.
- Whitelist / blacklist IPs may use any of the following syntaxes:
- CIDR (100.0.0.3/4)
- Range (100.0.0.3-100.0.5.10)
- Wildcard (100.0.0.*)
- Single (100.0.0.1)
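A single whitelist may combine all four formats. A minimal sketch, using illustrative addresses:

```
$conf['ah_whitelist'] = array(
'100.0.0.3/4',          // CIDR
'100.0.0.3-100.0.5.10', // Range
'100.0.0.*',            // Wildcard
'100.0.0.1',            // Single IP
);
```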
## Business Logic
- With no $conf values set, ``ac_protect_this_site();`` will do nothing.
- If the path is marked as restricted, all users not on the whitelist will receive access denied.
- If a user's IP is on the blacklist and **not** on the whitelist they will receive access denied.
- Filling ``$conf['ah_basic_auth_credentials']`` will result in all requests requiring an .htaccess-style login.
- Securing the site requires entries in both ``$conf['ah_whitelist']`` **and** ``$conf['ah_restricted_paths']``.
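For example, because the whitelist is consulted before the blacklist, you can deny an entire range while still admitting a single address inside it (illustrative IPs):

```
$conf['ah_blacklist'] = array(
'100.0.0.*',
);
$conf['ah_whitelist'] = array(
'100.0.0.9',
);
```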
## Examples
#### Block access for non-whitelisted users on all pages of non-production environments.
```
$conf['ah_restricted_paths'] = array(
'*',
);
$conf['ah_whitelist'] = array(
'100.0.0.*',
'100.0.0.1/5',
);
if (file_exists('/var/www/site-php')) {
require('/var/www/site-php/{site}/{site}-settings.inc');
if (!defined('DRUPAL_ROOT')) {
define('DRUPAL_ROOT', getcwd());
}
if (file_exists(DRUPAL_ROOT . '/sites/acquia.inc')) {
if (isset($_ENV['AH_NON_PRODUCTION']) && $_ENV['AH_NON_PRODUCTION']) {
require DRUPAL_ROOT . '/sites/acquia.inc';
ac_protect_this_site();
}
}
}
```
#### Block access to user and admin pages on the production environment. Enforce .htaccess authentication on non-production. Allow access to an API path without authentication
```
if (file_exists('/var/www/site-php')) {
require('/var/www/site-php/{site}/{site}-settings.inc');
if (!defined('DRUPAL_ROOT')) {
define('DRUPAL_ROOT', getcwd());
}
if (file_exists(DRUPAL_ROOT . '/sites/acquia.inc')) {
if (isset($_ENV['AH_SITE_ENVIRONMENT'])) {
if ($_ENV['AH_SITE_ENVIRONMENT'] != 'prod') {
$conf['ah_basic_auth_credentials'] = array(
'Editor' => 'Password',
'Admin' => 'P455w0rd',
);
$conf['ah_paths_no_cache'] = array(
'api'
);
}
else {
$conf['ah_restricted_paths'] = array(
'user',
'user/*',
'admin',
'admin/*',
);
$conf['ah_whitelist'] = array(
'100.0.0.9',
'100.0.0.1/5',
);
}
require DRUPAL_ROOT . '/sites/acquia.inc';
ac_protect_this_site();
}
}
}
```
#### Blacklist known bad IPs on all environments
```
$conf['ah_blacklist'] = array(
'12.13.14.15',
);
if (file_exists('/var/www/site-php')) {
require('/var/www/site-php/{site}/{site}-settings.inc');
if (!defined('DRUPAL_ROOT')) {
define('DRUPAL_ROOT', getcwd());
}
if (file_exists(DRUPAL_ROOT . '/sites/acquia.inc')) {
require DRUPAL_ROOT . '/sites/acquia.inc';
ac_protect_this_site();
}
}
```
<?php
/**
* @file
* Utilities for use in protecting an environment via basic auth or IP whitelist.
*/
function ac_protect_this_site() {
global $conf;
$client_ip = ip_address();
// Test if we are using drush (command-line interface)
$cli = drupal_is_cli();
// Default to not skipping the auth check
$skip_auth_check = FALSE;
// Is the user on the VPN? Default to FALSE.
$on_vpn = $cli ? TRUE : FALSE;
if (!empty($client_ip) && !empty($conf['ah_whitelist'])) {
$on_vpn = ah_ip_in_list($client_ip, $conf['ah_whitelist']);
$skip_auth_check = $skip_auth_check || $on_vpn;
}
// If the IP is not explicitly whitelisted check to see if the IP is blacklisted.
if (!$on_vpn && !empty($client_ip) && !empty($conf['ah_blacklist'])) {
if (ah_ip_in_list($client_ip, $conf['ah_blacklist'])) {
ah_page_403($client_ip);
}
}
// Check if we should skip auth check for this page.
if (ah_path_skip_auth()) {
$skip_auth_check = TRUE;
}
// Check if we should disable cache for this page.
if (ah_path_no_cache()) {
$conf['page_cache_maximum_age'] = 0;
}
// Is the page restricted to whitelist only? Default to FALSE.
$restricted_page = FALSE;
// Check to see whether this page is restricted.
if (!empty($conf['ah_restricted_paths']) && ah_paths_restrict()) {
$restricted_page = TRUE;
}
$protect_ip = !empty($conf['ah_whitelist']);
$protect_password = !empty($conf['ah_basic_auth_credentials']);
// Do not protect command line requests, e.g. Drush.
if ($cli) {
$protect_ip = FALSE;
$protect_password = FALSE;
}
// Un-comment to disable protection, e.g. for load tests.
// $skip_auth_check = TRUE;
// $on_vpn = TRUE;
// If not on whitelisted IP prevent access to protected pages.
if ($protect_ip && !$on_vpn && $restricted_page) {
ah_page_403($client_ip);
}
// If not skipping auth, check basic auth.
if ($protect_password && !$skip_auth_check) {
ah_check_basic_auth();
}
}
/**
* Output a 403 (forbidden access) response.
*/
function ah_page_403($client_ip) {
header('HTTP/1.0 403 Forbidden');
print "403 Forbidden: Access denied ($client_ip)";
exit;
}
/**
* Output a 401 (unauthorized) response.
*/
function ah_page_401($client_ip) {
header('WWW-Authenticate: Basic realm="This site is protected"');
header('HTTP/1.0 401 Unauthorized');
print "401 Unauthorized: Access denied ($client_ip)";
exit;
}
/**
* Check basic auth against allowed values.
*/
function ah_check_basic_auth() {
global $conf;
$authorized = FALSE;
$php_auth_user = isset($_SERVER['PHP_AUTH_USER']) ? $_SERVER['PHP_AUTH_USER'] : NULL;
$php_auth_pw = isset($_SERVER['PHP_AUTH_PW']) ? $_SERVER['PHP_AUTH_PW'] : NULL;
$credentials = isset($conf['ah_basic_auth_credentials']) ? $conf['ah_basic_auth_credentials'] : NULL;
if ($php_auth_user && $php_auth_pw && !empty($credentials)) {
if (isset($credentials[$php_auth_user]) && $credentials[$php_auth_user] == $php_auth_pw) {
$authorized = TRUE;
}
}
if ($authorized) {
return;
}
// Always fall back to 401.
ah_page_401(ip_address());
}
/**
* Determine if the current path is in the list of paths to not cache.
*/
function ah_path_no_cache() {
global $conf;
$q = isset($_GET['q']) ? $_GET['q'] : NULL;
$paths = isset($conf['ah_paths_no_cache']) ? $conf['ah_paths_no_cache'] : NULL;
if (!empty($q) && !empty($paths)) {
foreach ($paths as $path) {
if ($q == $path || strpos($q, $path) === 0) {
return TRUE;
}
}
}
return FALSE;
}
/**
* Determine if the current path is in the list of paths on which to not check
* auth.
*/
function ah_path_skip_auth() {
global $conf;
$q = isset($_GET['q']) ? $_GET['q'] : NULL;
$paths = isset($conf['ah_paths_skip_auth']) ? $conf['ah_paths_skip_auth'] : NULL;
if (!empty($q) && !empty($paths)) {
foreach ($paths as $path) {
if ($q == $path || strpos($q, $path) === 0) {
return TRUE;
}
}
}
return FALSE;
}
/**
* Check whether a path has been restricted.
*
*/
function ah_paths_restrict() {
global $conf;
if (isset($_GET['q'])) {
// Borrow some code from drupal_match_path()
foreach ($conf['ah_restricted_paths'] as &$path) {
$path = preg_quote($path, '/');
}
$paths = preg_replace('/\\\\\*/', '.*', $conf['ah_restricted_paths']);
$paths = '/^(' . join('|', $paths) . ')$/';
// If this is a restricted path, return TRUE.
if (preg_match($paths, $_GET['q'])) {
// Do not cache restricted paths
$conf['page_cache_maximum_age'] = 0;
return TRUE;
}
}
return FALSE;
}
/**
* Determine if the IP is within the ranges defined in the white/black list.
*/
function ah_ip_in_list($ip, $list) {
foreach ($list as $item) {
// Match IPs in CIDR format.
if (strpos($item, '/') !== false) {
list($range, $mask) = explode('/', $item);
// Take the binary form of the IP and range.
$ip_dec = ip2long($ip);
$range_dec = ip2long($range);
// Verify the given IPs are valid IPv4 addresses
if (!$ip_dec || !$range_dec) {
continue;
}
// Create the binary form of netmask.
$mask_dec = ~ (pow(2, (32 - $mask)) - 1);
// Run a bitwise AND to determine whether the IP and range exist
// within the same netmask.
if (($mask_dec & $ip_dec) == ($mask_dec & $range_dec)) {
return TRUE;
}
}
// Match against wildcard IPs or IP ranges.
elseif (strpos($item, '*') !== false || strpos($item, '-') !== false) {
// Construct a range from wildcard IPs
if (strpos($item, '*') !== false) {
$item = str_replace('*', 0, $item) . '-' . str_replace('*', 255, $item);
}
// Match against ranges by converting to long IPs.
list($start, $end) = explode('-', $item);
$start_dec = ip2long($start);
$end_dec = ip2long($end);
$ip_dec = ip2long($ip);
// Verify the given IPs are valid IPv4 addresses
if (!$start_dec || !$end_dec || !$ip_dec) {
continue;
}
if ($start_dec <= $ip_dec && $ip_dec <= $end_dec) {
return TRUE;
}
}
// Match against single IPs
elseif ($ip === $item) {
return TRUE;
}
}
return FALSE;
}
<?php
// This example requires `league/oauth2-client` package.
// Run `composer require league/oauth2-client` before running.
require __DIR__ . '/vendor/autoload.php';
use League\OAuth2\Client\Provider\GenericProvider;
use GuzzleHttp\Client;
// The UUID of an application you want to create the database for.
$applicationUuid = 'APP-UUID';
$dbName = 'test_database_1';
// See https://docs.acquia.com/cloud-platform/develop/api/auth/
// for how to generate a client ID and Secret.
$clientId = 'API-KEY';
$clientSecret = 'API-SECRET';
$provider = new GenericProvider([
'clientId' => $clientId,
'clientSecret' => $clientSecret,
'urlAuthorize' => '',
'urlAccessToken' => 'https://accounts.acquia.com/api/auth/oauth/token',
'urlResourceOwnerDetails' => '',
]);
$client = new Client();
$provider->setHttpClient($client);
echo 'retrieving access token', PHP_EOL;
$accessToken = $provider->getAccessToken('client_credentials');
echo 'access token retrieved', PHP_EOL;
// Generate a request object using the access token.
$request = $provider->getAuthenticatedRequest(
'POST',
"https://cloud.acquia.com/api/applications/{$applicationUuid}/databases",
$accessToken,
[
'headers' => ['Content-Type' => 'application/json'],
'body' => json_encode(['name' => $dbName])
]
);
// Send the request.
echo 'requesting db create api', PHP_EOL;
$response = $client->send($request);
echo 'response parsing', PHP_EOL;
$responseBody = json_decode($response->getBody()->getContents(), true);
$notificationLink = $responseBody['_links']['notification']['href'];
$retryCount = 10;
echo 'start watching for notification status at ', $notificationLink, PHP_EOL;
do {
sleep(5);
// create notification request.
$request = $provider->getAuthenticatedRequest(
'GET',
$notificationLink,
$accessToken
);
echo 'requesting notification status', PHP_EOL;
$response = $client->send($request);
$responseBody = json_decode($response->getBody()->getContents(), true);
echo 'notification status: ', $responseBody['status'], PHP_EOL;
if ($responseBody['status'] === 'succeeded') {
echo 'Successfully created database.';
exit(0);
} elseif ($responseBody['status'] === 'failed') {
echo 'Failed to create database.';
exit(1);
} else {
echo 'retrying notification in 5 sec', PHP_EOL;
$retryCount--;
$retry = $retryCount > 0;
}
} while ($retry);
<?php
require __DIR__ . '/vendor/autoload.php';
use League\OAuth2\Client\Provider\GenericProvider;
use League\OAuth2\Client\Provider\Exception\IdentityProviderException;
use GuzzleHttp\Client;
// See https://docs.acquia.com/cloud-platform/develop/api/auth/
// for how to generate a client ID and Secret.
$clientId = 'API Key';
$clientSecret = 'API Secret';
$provider = new GenericProvider([
'clientId' => $clientId,
'clientSecret' => $clientSecret,
'urlAuthorize' => '',
'urlAccessToken' => 'https://accounts.acquia.com/api/auth/oauth/token',
'urlResourceOwnerDetails' => '',
]);
try {
// Try to get an access token using the client credentials grant.
$accessToken = $provider->getAccessToken('client_credentials');
// Generate a request object using the access token.
$request = $provider->getAuthenticatedRequest(
'GET',
'https://cloud.acquia.com/api/account',
$accessToken
);
// Send the request.
$client = new Client();
$response = $client->send($request);
$responseBody = $response->getBody();
} catch (IdentityProviderException $e) {
// Failed to get the access token.
exit($e->getMessage());
}
#!/bin/sh
# Script to load a database, doing some conversions along the way
# EDIT THESE
dbfilename='db-backup.sql.gz'
dbuser='root'
dbpassword='rootpassword'
dbname='mydatabase'
# Flag to say whether we want to convert from innoDB to MyISAM (1 == yes)
# It will only convert the tables matching the regexp
innodb_to_myisam=0
innodb_to_myisam_exclude_tables_regexp='^(locales_source|locales_target|menu_links|workbench_scheduler_types)$'
# Flag for converting MyISAM to InnoDB (1 == yes)
# It will only convert the tables matching the regexp
myisam_to_innodb=0
myisam_to_innodb_exclude_tables_regexp='^XXX$'
# Tables that will be created with structure only and NO data
no_data_import_tables_regexp='^(__ACQUIA_MONITORING|accesslog|batch|boost_cache|cache|cache_.*|history|queue|search_index|search_dataset|search_total|sessions|watchdog|panels_hash_database_cache|migrate_.*)$'
pv -p $dbfilename |gzip -d -c | awk -F'`' '
NR==1 {
# http://superuser.com/questions/246784/how-to-tune-mysql-for-restoration-from-mysql-dump
# TODO? http://www.palominodb.com/blog/2011/08/02/mydumper-myloader-fast-backup-and-restore ?
print "SET SQL_LOG_BIN=0;"
print "SET unique_checks=0;"
print "SET autocommit=0;"
print "SET foreign_key_checks=0;"
output=1;
}
{
start_of_line=substr($0,1,200);
# Detect beginning of table structure definition.
if (index(start_of_line, "-- Table structure for table")==1) {
output=1
print "COMMIT;"
print "SET autocommit=0;"
current_db=$2
}
# Switch the engine from InnoDB to MyISAM: much faster to import.
if (substr(start_of_line,1,8)==") ENGINE") {
if ('${innodb_to_myisam:-0}' == 1) {
if (current_db ~ /'"$innodb_to_myisam_exclude_tables_regexp"'/) {
print "Skipping InnoDB -> MyISAM for " current_db >"/dev/stderr"
} else {
gsub(/=InnoDB/, "=MyISAM", $0);
#gsub(/CHARSET=utf8/, "CHARSET=latin1", $0);
}
}
if ('${myisam_to_innodb:-0}' == 1) {
if (current_db ~ /'"$myisam_to_innodb_exclude_tables_regexp"'/) {
print "Skipping MyISAM -> InnoDB for " current_db >"/dev/stderr"
} else {
gsub(/=MyISAM/, "=InnoDB", $0);
}
}
}
# Detect beginning of table data dump.
if (index(start_of_line, "-- Dumping data for table")==1) {
if (current_db != $2) {
print "Internal problem: unexpected data seems to come from table " $2 " while table " current_db " was expected";
current_db=$2
}
printf "\r Processing table " current_db > "/dev/stderr"
output=1
# Skip data in some tables
if (current_db ~ /'"$no_data_import_tables_regexp"'/) {
output=0
print "Skipping Data import (imported structure only) for " current_db >"/dev/stderr"
}
}
if (output==1) {
print
}
}
END {
print "COMMIT;"
}' |mysql -u$dbuser --password=$dbpassword $dbname
[ req ]
default_bits = 4096
default_keyfile = private.key
distinguished_name = req_distinguished_name
req_extensions = req_ext # The extensions to add to the self signed cert
[ req_distinguished_name ]
countryName = Country Name (2 letter code)
countryName_default = US
stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_default = Massachusetts
localityName = Locality Name (eg, city)
localityName_default = Boston
organizationName = Organization Name (eg, company)
organizationName_default = Acquia
organizationalUnitName = Organizational Unit Name (department, division)
organizationalUnitName_default =
commonName = Common Name (e.g. server FQDN or YOUR name)
commonName_max = 64
commonName_default = localhost
emailAddress = Email Address (e.g. admin@example.com)
emailAddress_default =
[ req_ext ]
subjectAltName = @alt_names
[alt_names]
DNS.1 = www.example.com
DNS.2 = edit.example.com
services:
# Replaces the default lock backend with a memcache implementation.
lock:
class: Drupal\Core\Lock\LockBackendInterface
factory: memcache.lock.factory:get
<?php
/**
* @file
* Contains Drupal 7 Acquia memcache configuration to be added directly following the Acquia database require line
* (see https://docs.acquia.com/cloud-platform/manage/code/require-line/ for more info)
*/
if (getenv('AH_SITE_ENVIRONMENT') &&
isset($conf['memcache_servers'])
) {
$conf['memcache_extension'] = 'Memcached';
$conf['cache_backends'][] = 'sites/all/modules/contrib/memcache/memcache.inc';
$conf['cache_default_class'] = 'MemCacheDrupal';
$conf['cache_class_cache_form'] = 'DrupalDatabaseCache';
// Enable compression
$conf['memcache_options'][Memcached::OPT_COMPRESSION] = TRUE;
$conf['memcache_stampede_protection_ignore'] = array(
// Ignore some cids in 'cache_bootstrap'.
'cache_bootstrap' => array(
'module_implements',
'variables',
'lookup_cache',
'schema:runtime:*',
'theme_registry:runtime:*',
'_drupal_file_scan_cache',
),
// Ignore all cids in the 'cache' bin starting with 'i18n:string:'
'cache' => array(
'i18n:string:*',
),
// Disable stampede protection for the entire 'cache_path' and 'cache_rules'
// bins.
'cache_path',
'cache_rules',
);
# Move semaphore out of the database and into memory for performance purposes
$conf['lock_inc'] = 'sites/all/modules/contrib/memcache/memcache-lock.inc';
}
<?php
// This file is available at
// https://docs.acquia.com/resource/simplesaml/sources/
$config = array(
// This is an authentication source which handles admin authentication.
'admin' => array(
// The default is to use core:AdminPassword, but it can be replaced with
// any authentication source.
'core:AdminPassword',
),
'default-sp' => array(
'saml:SP',
// The entityID is the entityID of the SP that the IdP is expecting.
// This value must be exactly what the IdP is expecting. If the
// entityID is not set, it defaults to the URL of the SP's metadata.
// Don't declare an entityID for Site Factory.
'entityID' => 'SP EntityID',
// If the IdP requires the SP to hold a certificate, the location
// of the self-signed certificate.
// If you need to generate a SHA256 cert, see
// https://gist.github.com/guitarte/5745b94c6883eaddabfea68887ba6ee6
'certificate' => "../cert/saml.crt",
'privatekey' => "../cert/saml.pem",
'redirect.sign' => TRUE,
'redirect.validate' => TRUE,
// The entityID of the IdP.
// This is included in the metadata from the IdP.
'idp' => 'IdP EntityID',
// NameIDFormat is included in the metadata from the IdP
'NameIDFormat' => 'urn:oasis:names:tc:SAML:2.0:nameid-format:transient',
// If the IdP does not pass any attributes, but provides a NameID in
// the authentication response, we can filter and add the value as an
// attribute.
// See https://simplesamlphp.org/docs/stable/saml:nameidattribute
'authproc' => array(
20 => array(
'class' => 'saml:NameIDAttribute',
'format' => '%V',
),
),
// The RelayState parameter needs to be set if SSL is terminated
// upstream. If you see the SAML response come back with
// https://example.com:80/saml_login, you likely need to set this.
// See https://github.com/simplesamlphp/simplesamlphp/issues/420
'RelayState' => 'https://' . $_SERVER['HTTP_HOST'] . '/saml_login',
// If working with ADFS, Microsoft may soon only allow SHA256 certs.
// You must specify signature.algorithm as SHA256.
// Defaults to SHA1 (http://www.w3.org/2000/09/xmldsig#rsa-sha1)
// See https://docs.microsoft.com/en-us/security/trusted-root/program-requirements
// 'signature.algorithm' => 'http://www.w3.org/2001/04/xmldsig-more#rsa-sha256',
),
);
#!/bin/bash
#
# Shell script to scan the default files directory with ClamAV
# Arguments:
# Email recipients: Comma separated list of email recipients wrapped in quotes
# Site environment: Site name and environment formatted like [site].[env]
#
SCAN_OUTPUT=/mnt/tmp/clamscan.log
EMAIL_RECIPIENTS=$1
SITE_ENV=$2
DATE=$(date)
CRON_OUTPUT=/var/log/sites/${SITE_ENV}/logs/$(hostname -s)/clamscan.log
if [ -d /mnt/gfs/${SITE_ENV} ]
then
{
echo -e "=============================\nStarting scan ${DATE}\n"
/usr/bin/clamscan -ri /mnt/gfs/${SITE_ENV}/sites/default/files > ${SCAN_OUTPUT}
echo -e "Checking output...\n"
cat ${SCAN_OUTPUT} | grep "FOUND"
if [ $? -eq 0 ] ; then
echo -e "FOUND VIRUS, SENDING EMAILS TO ${EMAIL_RECIPIENTS}.\n"
cat ${SCAN_OUTPUT} | mail -s "${DATE} ClamAV has detected a virus on your website files directory" "${EMAIL_RECIPIENTS}"
else
echo -e "CLEAN, NO VIRUSES FOUND.\n"
fi
echo -e "Done\n=============================\n"
} >> ${CRON_OUTPUT} 2>&1
else
echo "ERROR: directory /mnt/gfs/${SITE_ENV} is not a valid path. Please update your scheduled task with the correct [site].[env] as the second parameter"
fi
<?php
/**
* Process a subset of all the entities to be enqueued in a single request.
*
* @param $entity_type
* The entity type.
* @param $bundle
* The entity bundle.
* @param $entity_ids
* An array of IDs of the entities to enqueue.
* @param $context
* The batch context array.
*/
function export_enqueue_entities($entity_type, $bundle, $entity_ids, &$context) {
/**
* Number of entities per iteration. Decrease this number if your site has
* too many dependencies per node.
*
* @var int $entities_per_iteration
*/
$entities_per_iteration = 5;
if (empty($context['sandbox'])) {
$context['sandbox']['progress'] = 0;
$context['sandbox']['max'] = count($entity_ids);
$context['results']['total'] = 0;
}
/** @var \Drupal\acquia_contenthub\EntityManager $entity_manager */
$entity_manager = \Drupal::service('acquia_contenthub.entity_manager');
/** @var \Drupal\acquia_contenthub\Controller\ContentHubEntityExportController $export_controller */
$export_controller = \Drupal::service('acquia_contenthub.acquia_contenthub_export_entities');
$slice_entity_ids = array_slice($entity_ids, $context['sandbox']['progress'], $entities_per_iteration);
$ids = array_values($slice_entity_ids);
if (!empty($ids)) {
$entities = \Drupal::entityTypeManager()
->getStorage($entity_type)
->loadMultiple($ids);
foreach ($entities as $entity) {
if ($entity_manager->isEligibleEntity($entity)) {
// Entity is eligible, then re-export.
$export_controller->exportEntities([$entity]);
}
}
}
$context['sandbox']['progress'] += count($ids);
$enqueued = implode(',', $ids);
$message = empty($enqueued) ? "Enqueuing '$entity_type' ($bundle) entities: No entities to queue." : "Enqueuing '$entity_type' ($bundle) entities with IDs: " . $enqueued . "\n";
$context['results']['total'] += count($ids);
$context['message'] = $message;
if ($context['sandbox']['progress'] != $context['sandbox']['max']) {
$context['finished'] = $context['sandbox']['progress'] / $context['sandbox']['max'];
}
}
function export_enqueue_finished($success, $results, $operations) {
// The 'success' parameter means no fatal PHP errors were detected. All
// other error management should be handled using 'results'.
if ($success) {
$message = 'Total number of enqueued entities: ' . $results['total'];
}
else {
$message = t('Finished with an error.');
}
drush_print($message);
}
<?php
namespace Drupal\acquia_contenthub_publisher\EventSubscriber\EnqueueEligibility;
use Drupal\acquia_contenthub_publisher\AcquiaContentHubPublisherEvents;
use Drupal\acquia_contenthub_publisher\Event\ContentHubEntityEligibilityEvent;
use Drupal\file\FileInterface;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;
/**
* Subscribes to entity eligibility to prevent enqueueing temporary files.
*/
class FileIsTemporary implements EventSubscriberInterface {
/**
* {@inheritdoc}
*/
public static function getSubscribedEvents() {
$events[AcquiaContentHubPublisherEvents::ENQUEUE_CANDIDATE_ENTITY][] = ['onEnqueueCandidateEntity', 50];
return $events;
}
/**
* Prevent temporary files from enqueueing.
*
* @param \Drupal\acquia_contenthub_publisher\Event\ContentHubEntityEligibilityEvent $event
* The event to determine entity eligibility.
*/
public function onEnqueueCandidateEntity(ContentHubEntityEligibilityEvent $event) {
// If this is a file with status = 0 (TEMPORARY FILE) do not export it.
// This is a check to avoid exporting temporary files.
$entity = $event->getEntity();
if ($entity instanceof FileInterface && $entity->isTemporary()) {
$event->setEligibility(FALSE);
$event->stopPropagation();
}
}
}
<?php
/**
* @file
* Add entities from Content Hub to the Import Queue.
*
* Please locate this field in the 'scripts' directory as a sibling of docroot:
* <DOCROOT>/../scripts/ach-bulk-import.php
*
* To run the script, execute the drush command:
* $drush scr ../scripts/ach-bulk-import.php
*
* Make sure to enable the Import Queue before executing this script.
*
* Notes:
*
* 1) If you want to explicitly avoid importing a particular entity type, please
* add it to the list of $global_excluded_types.
* 2) By default importing includes all dependencies. To change this behavior
* change the variable $include_dependencies to FALSE.
* 3) You can decide whether to publish entities after importing them. To
* publish entities after importing, set variable $publishing_status to 1.
* Setting $publishing_status to 0 imports them as unpublished.
* 4) You can decide to use FIFO (first exported entities are imported first),
* or LIFO (last exported entities are imported first), according to the
* $fifo variable: $fifo = 1 uses FIFO, $fifo = 0 uses LIFO.
* 5) You can set the author of the nodes to be imported locally. Example: If
* you set the $uid = 1, it will import all nodes as administrator (author
* is administrator). Change it to specific UID to use as author.
*/
use Drupal\acquia_contenthub\ContentHubEntityDependency;
use Drupal\Component\Serialization\Json;
// Global exclusion of entity types.
$global_excluded_types = [
// 'redirect' => 'redirect',
];
// Include importing dependencies. By default it is "TRUE".
$include_dependencies = TRUE;
// Determine if we want to publish imported entities or not.
// 1: Publish entities, 0: Do not publish.
$publishing_status = 1;
// If TRUE, it will import from the last page to the first (FIFO: first entities
// exported will be the first to import), otherwise will use LIFO (Last exported
// entities will be imported first).
$fifo = TRUE;
// Determine the author UUID for the nodes to be created.
$uid = 1; // administrator.
$user = \Drupal\user\Entity\User::load($uid);
$author = $user->uuid();
/** @var \Drupal\acquia_contenthub\ContentHubEntitiesTracking $entities_tracking */
$entities_tracking = \Drupal::service('acquia_contenthub.acquia_contenthub_entities_tracking');
// Loading ClientManager to be able to execute requests to Content Hub and
// to check connection.
/** @var \Drupal\acquia_contenthub\Client\ClientManager $client_manager */
$client_manager = \Drupal::service('acquia_contenthub.client_manager');
$client = $client_manager->getConnection();
// The ImportEntityManager Service allows to import entities.
/** @var \Drupal\acquia_contenthub\ImportEntityManager $import_manager */
$import_manager = \Drupal::service("acquia_contenthub.import_entity_manager");
// List all the 'dependent' entities type IDs.
$dependent_entity_type_ids = ContentHubEntityDependency::getPostDependencyEntityTypes();
$excluded_types = array_merge($global_excluded_types, $dependent_entity_type_ids);
// Checks whether the import queue has been enabled.
$import_with_queue = \Drupal::config('acquia_contenthub.entity_config')->get('import_with_queue');
if (!$import_with_queue) {
drush_user_abort('Please enable the Import Queue.');
return;
}
// Check if the site is connected to Content Hub.
if (!$client_manager->isConnected()) {
return;
}
$list = $client_manager->createRequest('listEntities', [[]]);
$total = floor($list['total'] / 1000) * 1000;
// Starting page.
$start = $fifo ? $total : 0;
// Step
$step = $fifo ? -1000 : 1000;
// Counter of queued entities.
$i = 0;
do {
// List all entities you want to import by modifying the $options array.
/*
* Example of how to structure the $options parameter:
*
* $options = [
* 'type' => 'node',
* 'origin' => '11111111-1111-1111-1111-111111111111',
* 'filters' => [
* 'status' => 1,
* 'title' => 'New*',
* 'body' => '/Boston/',
* ],
* ];
*
*/
$options = [
'start' => $start,
];
$list = $client_manager->createRequest('listEntities', [$options]);
foreach ($list['data'] as $entity) {
$i++;
// We do not want to import "dependent" entities.
// These 3 lines are not needed in this example, but if we are listing all
// entities, make sure to exclude dependent entities to be sent directly to
// the importRemoteEntity() method because you would not be sure if their
// host (parent) entity exist in the system yet.
if (in_array($entity['type'], $excluded_types)) {
drush_print("{$i}) Skipped entity type = {$entity['type']} , UUID = {$entity['uuid']} (Dependent or excluded entity type)");
continue;
}
// Do not import the entity if it has been previously imported and has the
// same "modified" flag, which means there are no new updates on the entity.
if ($imported_entity = $entities_tracking->loadImportedByUuid($entity['uuid'])) {
if ($imported_entity->getModified() === $entity['modified']) {
drush_print("{$i}) Skipped entity type = {$entity['type']} , UUID = {$entity['uuid']} (Entity already imported)");
continue;
}
}
// Add entity to import queue.
try {
$response = $import_manager->addEntityToImportQueue($entity['uuid'], $include_dependencies, $author, $publishing_status);
$status = Json::decode($response->getContent());
if (!empty($status['status']) && $status['status'] == 200) {
drush_print("{$i}) Entity added to import queue: type = {$entity['type']} , UUID = {$entity['uuid']}");
}
else {
drush_print("{$i}) ERROR: Cannot add entity to import queue: type = {$entity['type']} , UUID = {$entity['uuid']}");
}
} catch (\Drupal\Core\Entity\EntityStorageException $ex) {
drush_print("{$i}) ERROR: Failed to add entity to import queue: type = {$entity['type']} , UUID = {$entity['uuid']} [{$ex->getMessage()}]");
}
}
$start = $start + $step;
$exit_condition = $fifo ? $start >= 0 : $start <= $total;
} while ($exit_condition);
<?php
namespace Drupal\acquia_contenthub_publisher\EventSubscriber\PublishEntities;
use Drupal\acquia_contenthub_publisher\AcquiaContentHubPublisherEvents;
use Drupal\acquia_contenthub_publisher\Event\ContentHubPublishEntitiesEvent;
use Drupal\acquia_contenthub_publisher\PublisherTracker;
use Drupal\Core\Database\Connection;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;
class RemoveUnmodifiedEntities implements EventSubscriberInterface {
/**
* The database connection.
*
* @var \Drupal\Core\Database\Connection
*/
protected $database;
/**
* RemoveUnmodifiedEntities constructor.
*
* @param \Drupal\Core\Database\Connection $database
* The database connection.
*/
public function __construct(Connection $database) {
$this->database = $database;
}
/**
* {@inheritdoc}
*/
public static function getSubscribedEvents() {
$events[AcquiaContentHubPublisherEvents::PUBLISH_ENTITIES][] = ['onPublishEntities', 1000];
return $events;
}
/**
* Removes unmodified entities before publishing.
*
* @param \Drupal\acquia_contenthub_publisher\Event\ContentHubPublishEntitiesEvent $event
*   The publish entities event.
*/
public function onPublishEntities(ContentHubPublishEntitiesEvent $event) {
$dependencies = $event->getDependencies();
$uuids = array_keys($dependencies);
$query = $this->database->select('acquia_contenthub_publisher_export_tracking', 't')
->fields('t', ['entity_uuid', 'hash']);
$query->condition('t.entity_uuid', $uuids, 'IN');
$query->condition('t.status', [PublisherTracker::CONFIRMED, PublisherTracker::EXPORTED], 'IN');
$results = $query->execute();
foreach ($results as $result) {
// Can't check it if it doesn't have a hash.
// @todo make this a query.
if (!$result->hash) {
continue;
}
$wrapper = $dependencies[$result->entity_uuid];
if ($wrapper->getHash() == $result->hash) {
$event->removeDependency($result->entity_uuid);
}
}
}
}
#!/usr/bin/env php
<?php
use GuzzleHttp\Client;
use GuzzleHttp\Exception\RequestException;
// Example script for making backups of several sites through the REST API.
// Two things are left up to the script user:
// - Including Guzzle, which is used by request();
// e.g. by doing: 'composer init; composer require guzzlehttp/guzzle'
require 'vendor/autoload.php';
// - Populating $config:
$config = [
// URL of a subsection inside the SF REST API; must end with sites/.
'url' => 'https://www.[CLIENT].acsitefactory.com/api/v1/sites/',
'api_user' => '',
'api_key' => '',
// Site IDs of the sites to process; can also be provided as CLI argument.
'sites' => [],
// Number of days before backups are deleted; can also be provided on CLI.
'backup_retention' => 30,
// Request parameter for /api/v1#List-sites.
'limit' => 100,
// The components of the websites to backup.
// Details: /api/v1#Create-a-site-backup.
// 'codebase' is excluded from the default components since those files would
// be the same in each site backup, and cannot be restored into the factory.
'components' => ['database', 'public files', 'private files', 'themes'],
];
if ($argc < 2 || $argc > 4 || !in_array($argv[1], array('backup-add', 'backup-del'), TRUE)) {
$help = <<<EOT
Usage: php application.php parameter [sites] [backup_retention=30].
Where:
- parameter is one of {backup-add, backup-del}
- [sites] must be either a comma-separated list (e.g. 111,222,333) or 'all'
- [backup_retention] is the number of days for which backups are retained;
  backups older than this are deleted by the backup-del command (defaults to 30 days)
EOT;
echo $help;
exit(1);
}
// Lower the 'limit' parameter to the maximum which the API allows.
if ($config['limit'] > 100) {
$config['limit'] = 100;
}
// Check if the list of sites in $config is to be overridden by the provided
// input. If the input is set to 'all' then fetch the list of sites using the
// Site Factory API, otherwise it should be a comma separated list of site IDs.
if ($argc >= 3) {
if ($argv[2] == 'all') {
$config['sites'] = get_all_sites($config);
}
else {
// Removing spaces.
$no_spaces = str_replace(' ', '', $argv[2]);
// Keeping only IDs that are valid.
$config['sites'] = array_filter(explode(',', $no_spaces), "id_check");
// Removing duplicates.
$config['sites'] = array_unique($config['sites']);
}
}
// Check if the backup_retention parameter is overwritten.
if ($argc >= 4 && id_check($argv[3])) {
$config['backup_retention'] = $argv[3];
}
// Helper; returns true if given ID is valid (numeric and > 0), false otherwise.
function id_check($id) {
return is_numeric($id) && $id > 0;
}
// Fetches the list of all sites using the Site Factory REST API.
function get_all_sites($config) {
// Starting from page 1.
$page = 1;
$sites = array();
printf("Getting all sites - Limit / request: %d\n", $config['limit']);
// Iterate through the paginated list until we get all sites, or
// an error occurs.
do {
printf("Getting sites page: %d\n", $page);
$method = 'GET';
$url = $config['url'] . "?limit=" . $config['limit'] . "&page=" . $page;
$has_another_page = FALSE;
$res = request($url, $method, $config);
if (!$res || $res->getStatusCode() != 200) {
echo "Error whilst fetching site list!\n";
exit(1);
}
$next_page_header = $res->getHeader('link');
$response = json_decode($res->getBody()->getContents());
// If the next page header is present and has a "next" link, we know we
// have another page.
if (!empty($next_page_header) && strpos($next_page_header[0], 'rel="next"') !== FALSE) {
$has_another_page = TRUE;
$page++;
}
foreach ($response->sites as $site) {
$sites[] = $site->id;
}
} while ($has_another_page);
return $sites;
}
// Helper function to return API user and key.
function get_request_auth($config) {
return [
'auth' => [$config['api_user'], $config['api_key']],
];
}
// Sends a request using the guzzle HTTP library; prints out any errors.
function request($url, $method, $config, $form_params = []) {
// We are setting http_errors => FALSE so that we can handle them ourselves.
// Otherwise, we cannot differentiate between different HTTP status codes
// since all 40X codes will just throw a ClientError exception.
$client = new Client(['http_errors' => FALSE]);
$parameters = get_request_auth($config);
if ($form_params) {
$parameters['form_params'] = $form_params;
}
try {
$res = $client->request($method, $url, $parameters);
return $res;
}
catch (RequestException $e) {
printf("Request exception!\nError message %s\n", $e->getMessage());
}
return NULL;
}
// Iterates through backups for a certain site and deletes them if they are
// past the backup_retention mark.
function backup_del($backups, $site_id, $config) {
// Iterating through existing backups for current site and deleting those
// that are X days old.
$time = $config['backup_retention'] . ' days ago';
foreach ($backups as $backup) {
$timestamp = $backup->timestamp;
if ($timestamp < strtotime($time)) {
printf("Deleting backup %s (ID: %d).\n", $backup->label, $backup->id);
$method = 'DELETE';
$url = $config['url'] . $site_id . '/backups/' . $backup->id;
$res = request($url, $method, $config);
if (!$res || $res->getStatusCode() != 200) {
printf("Error! Whilst deleting backup ID %d. Please check the above messages for the full error.\n", $backup->id);
continue;
}
$task = json_decode($res->getBody()->getContents())->task_id;
printf("Deleting backup (ID: %d) with task ID %d.\n", $backup->id, $task);
}
else {
printf("Keeping %s since it was created more recently than %s (ID: %d).\n", $backup->label, $time, $backup->id);
}
}
}
// Creates or deletes backups depending on the operation given.
function backup($operation, $config) {
// Setting global operation endpoints and messages.
if ($operation === 'backup-add') {
$endpoint = '/backup';
$message = "Creating backup for site ID %d.\n";
$method = 'POST';
$form_params = [
'components' => $config['components'],
];
}
else {
// Unlike elsewhere in this script, we do not paginate through backups;
// we fetch the maximum the API allows in a single request.
$endpoint = '/backups?limit=100';
$message = "Retrieving old backups for site ID %d.\n";
$method = 'GET';
$form_params = [];
}
// Iterating through the list of sites in $config.
for ($i = 0; $i < count($config['sites']); $i++) {
// Sending API request.
$url = $config['url'] . $config['sites'][$i] . $endpoint;
$res = request($url, $method, $config, $form_params);
$message_site = sprintf($message, $config['sites'][$i]);
// If request returned an error, we show that and
// we continue with another site.
if (!$res) {
// An exception was thrown.
printf('Error whilst %s', $message_site);
printf("Please check the above messages for the full error.\n");
continue;
}
elseif ($res->getStatusCode() != 200) {
// If a site has no backups, it will return a 404.
if ($res->getStatusCode() == 404 && $operation == 'backup-del') {
printf("Site ID %d has no backups.\n", $config['sites'][$i]);
}
else {
printf('Error whilst %s', $message_site);
printf("HTTP code %d\n", $res->getStatusCode());
$body = json_decode($res->getBody()->getContents());
printf("Error message: %s\n", $body ? $body->message : '<empty>');
}
continue;
}
// All good here.
echo $message_site;
// For deleting backups, we have to iterate through the backups we get.
if ($operation == 'backup-del') {
backup_del(json_decode($res->getBody()->getContents())->backups, $config['sites'][$i], $config);
}
}
}
backup($argv[1], $config);
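The site-ID sanitizing steps above (strip spaces, keep positive numeric IDs, drop duplicates) can be reproduced with standard shell tools; the input below is illustrative:

```shell
# Illustrative input mimicking the [sites] CLI argument.
input="111, 222,abc,222,0"
# Strip spaces, split on commas, keep positive integers, de-duplicate --
# the same steps as str_replace/explode/array_filter/array_unique above
# (sort -u sorts the IDs, whereas array_unique preserves input order).
echo "$input" | tr -d ' ' | tr ',' '\n' | grep -E '^[0-9]+$' | awk '$1 > 0' | sort -u
```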
<?php
/**
* @file
* Contains caching configuration.
*/
use Composer\Autoload\ClassLoader;
/**
* Use memcache as cache backend.
*
* Autoload memcache classes and service container in case module is not
* installed. Avoids the need to patch core and allows for overriding the
* default backend when installing Drupal.
*
* @see https://www.drupal.org/node/2766509
*/
if (!function_exists('get_deployment_id')) {
function get_deployment_id() {
static $id = NULL;
if ($id === NULL) {
$site_settings = $GLOBALS['gardens_site_settings'];
$deployment_id_file = "/mnt/www/site-php/{$site_settings['site']}.{$site_settings['env']}/.vcs_head_ref";
if (is_readable($deployment_id_file)) {
$id = file_get_contents($deployment_id_file);
if ($id === FALSE) {
$id = NULL;
}
}
else {
$id = NULL;
}
}
return $id;
}
}
if (getenv('AH_SITE_ENVIRONMENT') &&
array_key_exists('memcache', $settings) &&
array_key_exists('servers', $settings['memcache']) &&
!empty($settings['memcache']['servers'])
) {
// Check for PHP Memcached libraries.
$memcache_exists = class_exists('Memcache', FALSE);
$memcached_exists = class_exists('Memcached', FALSE);
$memcache_services_yml = DRUPAL_ROOT . '/modules/contrib/memcache/memcache.services.yml';
$memcache_module_is_present = file_exists($memcache_services_yml);
if ($memcache_module_is_present && ($memcache_exists || $memcached_exists)) {
// Use Memcached extension if available.
if ($memcached_exists) {
$settings['memcache']['extension'] = 'Memcached';
}
if (class_exists(ClassLoader::class)) {
$class_loader = new ClassLoader();
$class_loader->addPsr4('Drupal\\memcache\\', DRUPAL_ROOT . '/modules/contrib/memcache/src');
$class_loader->register();
$settings['container_yamls'][] = $memcache_services_yml;
// Default settings for the Memcache module.
// Enable compression for PHP 7.
$settings['memcache']['options'][Memcached::OPT_COMPRESSION] = TRUE;
// Set key_prefix to avoid drush cr flushing all bins on multisite.
$settings['memcache']['key_prefix'] = sprintf('%s%s_', $conf['acquia_hosting_site_info']['db']['name'], get_deployment_id());
// Decrease latency.
$settings['memcache']['options'][Memcached::OPT_TCP_NODELAY] = TRUE;
// Bootstrap cache.container with memcache rather than database.
$settings['bootstrap_container_definition'] = [
'parameters' => [],
'services' => [
'database' => [
'class' => 'Drupal\Core\Database\Connection',
'factory' => 'Drupal\Core\Database\Database::getConnection',
'arguments' => ['default'],
],
'settings' => [
'class' => 'Drupal\Core\Site\Settings',
'factory' => 'Drupal\Core\Site\Settings::getInstance',
],
'memcache.settings' => [
'class' => 'Drupal\memcache\MemcacheSettings',
'arguments' => ['@settings'],
],
'memcache.factory' => [
'class' => 'Drupal\memcache\Driver\MemcacheDriverFactory',
'arguments' => ['@memcache.settings'],
],
'memcache.timestamp.invalidator.bin' => [
'class' => 'Drupal\memcache\Invalidator\MemcacheTimestampInvalidator',
'arguments' => ['@memcache.factory', 'memcache_bin_timestamps', 0.001],
],
'memcache.backend.cache.container' => [
'class' => 'Drupal\memcache\DrupalMemcacheInterface',
'factory' => ['@memcache.factory', 'get'],
'arguments' => ['container'],
],
'cache_tags_provider.container' => [
'class' => 'Drupal\Core\Cache\DatabaseCacheTagsChecksum',
'arguments' => ['@database'],
],
'cache.container' => [
'class' => 'Drupal\memcache\MemcacheBackend',
'arguments' => [
'container',
'@memcache.backend.cache.container',
'@cache_tags_provider.container',
'@memcache.timestamp.invalidator.bin',
'@memcache.settings',
],
],
],
];
// Content Hub 2.x requires the Depcalc module which needs to use the database backend.
$settings['cache']['bins']['depcalc'] = 'cache.backend.database';
// Use memcache for bootstrap, discovery, config instead of fast chained
// backend to properly invalidate caches on multiple webs.
// See https://www.drupal.org/node/2754947
$settings['cache']['bins']['bootstrap'] = 'cache.backend.memcache';
$settings['cache']['bins']['discovery'] = 'cache.backend.memcache';
$settings['cache']['bins']['config'] = 'cache.backend.memcache';
// Use memcache as the default bin.
$settings['cache']['default'] = 'cache.backend.memcache';
}
}
}
#!/bin/sh
## Initiate a code and database update from Site Factory
## Origin: http://docs.acquia.com/site-factory/extend/api/examples
# This script should primarily be used on non-production environments.
# Mandatory parameters:
# env : environment to run update on. Example: dev, pprod, qa2, test.
# - the api user must exist on this environment.
# - for security reasons, update of prod environment is *not*
# supported and must be performed manually through UI
# branch : branch/tag to update. Example: qa-build
# update_type : code or code,db
source $(dirname "$0")/includes/global-api-settings.inc.sh
env="$1"
branch="$2"
update_type="$3"
# Normalize "code,db" to "code, db" (a space after the comma) as expected.
if [ "$update_type" = "code,db" ]
then
update_type="code, db"
fi
# Edit the following line, replacing [domain] with the appropriate
# part of your domain name.
curl "https://www.${env}-[domain].acsitefactory.com/api/v1/update" \
-v -u ${user}:${api_key} -k -X POST \
-H 'Content-Type: application/json' \
-d "{\"sites_ref\": \"${branch}\", \"sites_type\": \"${update_type}\"}"
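The update_type normalization performed by the script can be checked in isolation; the input here is illustrative:

```shell
update_type="code,db"
# Normalize the "code,db" shorthand to "code, db", as the script does
# before sending it as sites_type.
if [ "$update_type" = "code,db" ]; then
  update_type="code, db"
fi
echo "$update_type"
# prints "code, db"
```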
<?php
/**
* @file
*
* This post-settings-php hook conditionally sets Drupal's page cache
* lifetime to a value greater than 300 seconds (5 minutes). Values of
* 300 seconds or less are ignored, so the lifetime cannot be lowered.
*
* This does not fire on Drush requests, as it interferes with site creation.
* It also means that drush will report back incorrect values for the
* cache lifetime, so using a real browser is the easiest way to validate
* what the current settings are.
*
* How to enable this for a site:
* - drush vset acsf_allow_override_page_cache 1
* - drush vset page_cache_maximum_age 3600
*/
if (!drupal_is_cli()) {
$result = db_query("SELECT value FROM {variable} WHERE name = 'acsf_allow_override_page_cache';")->fetchField();
if ($result) {
$acsf_allow_override_page_cache = unserialize($result);
if ($acsf_allow_override_page_cache) {
$result = db_query("SELECT value FROM {variable} WHERE name = 'page_cache_maximum_age';")->fetchField();
// An empty result indicates no value was set in the database, so we
// ignore the site.
if ($result) {
$page_cache_maximum_age = (int) unserialize($result);
if ($page_cache_maximum_age > 300) {
$conf['page_cache_maximum_age'] = $page_cache_maximum_age;
}
}
}
}
}
<?php
/**
* @file
* Example implementation of ACSF post-settings-php hook.
*
* @see https://docs.acquia.com/site-factory/extend/hooks
*/
// Changing the database transaction isolation level from `REPEATABLE-READ`
// to `READ-COMMITTED` to avoid/minimize the deadlocks.
// @see https://docs.acquia.com/acquia-cloud-platform/help/93891-fixing-database-deadlocks
// for reference.
$databases['default']['default']['init_commands'] = [
'transaction_isolation' => 'SET SESSION transaction_isolation="READ-COMMITTED"',
];
if (file_exists('/var/www/site-php')) {
acquia_hosting_db_choose_active($conf['acquia_hosting_site_info']['db'], 'default', $databases, $conf);
}
If this content did not answer your questions, try searching or contacting our support team for further assistance.