PHP 8.4 is compatible with Drupal 10.4 and higher, as well as Drupal 11.1 and higher¶
PHP 8.4 is compatible only with Drupal 10.4 and higher and with Drupal 11.1 and higher. Drupal 7 is incompatible with PHP 8.4.
PHP 8.3 is compatible with Drupal 10.2 or higher¶
PHP 8.3 is compatible only with Drupal 10.2 or higher. For more information, see https://www.drupal.org/docs/getting-started/system-requirements/php-requirements.
pcntl_fork() is not supported in live code and can cause stability issues with PHP 8 and later.¶
Using the pcntl_fork() function in website code with PHP 8 and later can leak PHP processes, which gradually crashes the site. Acquia recommends that you do not use this function in website code, as it might eventually be disabled. However, you can use it in CLI PHP scripts.
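For example, a minimal sketch that restricts forking to command-line scripts; the guard shown here is an illustration, not an Acquia requirement:
<?php
// Only fork when running from the command line, never during a web request.
if (PHP_SAPI === 'cli' && function_exists('pcntl_fork')) {
  $pid = pcntl_fork();
  if ($pid === -1) {
    exit('Fork failed.');
  }
  // The parent ($pid > 0) and the child ($pid === 0) continue separately here.
}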
The Cloud Platform notification API does not display real-time results. It typically returns a response within seconds, but sometimes can take several minutes to reflect the current state of the platform.
When inspecting the output of any successful or failed task, the final line is: zlib(finalizer): the stream was freed prematurely. This message can be ignored.
XML is not an acceptable file type in Support tickets¶
The customer-facing ticket submission page lists XML as an acceptable file type, but XML files are not accepted for upload.
Workaround: Place the XML in a .zip file, and then attach the .zip file to the ticket.
The active domain for an environment cannot be changed¶
The active domain for an environment defaults to the first bare domain name, with or without the www prefix, and cannot be changed. Environments containing no top-level custom domains use the Acquia default domain as the active domain.
Environment variables starting with $ followed by a number are removed¶
In the Cloud UI, if an environment variable starts with $ followed by a number, the first two characters are removed.
For example, adding $23456 as an environment variable results in 3456.
Workaround: When using the environment variable in your code, manually prepend the first two characters to the variable. Acquia recommends adding validation to your code so that your application does not break after the underlying issue is resolved.
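A minimal sketch in PHP, assuming the variable was saved with the value $23456 and therefore stored as 3456; the variable name MY_SECRET is a placeholder:
<?php
// The Cloud UI stored "3456" instead of "$23456"; re-add the stripped prefix.
$stored = getenv('MY_SECRET');
// Guard against double-prefixing in case the platform issue is fixed later.
$value = (is_string($stored) && substr($stored, 0, 2) !== '$2') ? '$2' . $stored : $stored;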
Known issues with version control¶
Git checkouts fail after reporting non-existent changes to binary files¶
Git checkouts fail with the following error when non-text files contain a specific byte sequence and are not explicitly defined as binary files: Your local changes to the following files would be overwritten by checkout. Please, commit your changes or stash them.
Workaround: Edit the following section of your .gitattributes file:
# Auto-detect text files, ensure they use LF.
* text=auto eol=lf
and then make one of the following changes:
- Remove the line beginning with *.
- Remove the eol=lf text.
- Ensure that a specification line is added for every binary file extension used in your repository (see the example after this list).
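For example, a sketch of explicit binary specifications in .gitattributes; the extensions shown are placeholders, so list the ones your repository actually uses:
# Mark binary file types explicitly so Git never treats them as text.
*.png binary
*.jpg binary
*.woff2 binary
*.pdf binary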
Cloud Platform doesn’t allow you to delete branches currently deployed to an environment. If you try to delete a branch from the command line, an error message like the following displays:
remote: Operation rejected: remote: Git branch or tag test_branch cannot be
deleted because it is currently deployed to testsite.prod
When attempting to push code, some users may encounter error messages like the following example if the root account owns a directory, instead of the siteuser account:
remote: error: insufficient permission for adding an object to repository
database ./objects
To correct the directory’s ownership settings, create a Support ticket.
Deployment keys must be associated with a user account¶
Cloud Platform does not support the use of deployment keys (machine keys) that are not associated with an individual user account. All SSH keys must be associated with a user account.
Workaround: Create a user account to associate with the deployment key, as described in Deployment keys and Cloud Platform.
Known issues with the Acquia Connector¶
Unable to log in through Acquia Connector¶
You might get a 404 error while logging in through Acquia Connector.
Workaround: Assign the Access to legacy product keys permission to your role, and then re-authenticate.
Non-hosted applications have an access issue when using Acquia Connector with Search¶
In non-hosted applications, only users with admin privileges can click the Sign in with Acquia option for authentication to the search functionality. Non-admin user roles like team lead and developer cannot access this option.
Acquia Connector 8.x-1.17 causes error during cron runs¶
Upgrading to Acquia Connector 8.x-1.17 introduces the following error and may interfere with cron runs:
Call to a member function getEditable() on null
Workaround: Acquia recommends reverting to Acquia Connector 8.x-1.16.
See Upgrading to acquia_connector-8.x-1.17 can throw error “Call to a member function getEditable() on null” during cron runs for more information.
Acquia Connector requests can time out¶
Drupal websites connected to an Acquia subscription send heartbeat requests during each cron run. By default, cron runs at the beginning of each hour, which can cause the Acquia subscription service to receive thousands of simultaneous connections across all subscriber websites, causing some requests to time out.
Workaround: If your subscription service is experiencing timeouts, Acquia recommends that you modify your cron runs so that they do not begin at the start of the hour.
The Acquia Connector module conflicts with Remote stream wrapper module¶
The Acquia Connector module has a conflict with the Remote stream wrapper module, which is listed on the Modules to use with caution on Cloud Platform page.
Workaround: Apply this patch, available on Drupal.org.
Known issues with Drush¶
Using the Drush aliases from the Drush page in the Cloud Platform user interface might generate warnings with certain Drush commands if the site is running PHP 8.0.
dev:
  root: /var/www/html/d9forcl57747.dev/docroot
  ac-site: d9forcl57747
  ac-env: dev
  ac-realm: devcloud
  uri: d9forcl57747abapnfzmsu.devcloud.acquia-sites.com
dev.livedev:
  parent: '@d9forcl57747.dev'
  root: /mnt/gfs/d9forcl57747.dev/livedev/docroot
  host: d9forcl57747abapnfzmsu.ssh.devcloud.acquia-sites.com
  user: d9forcl57747.dev
  paths:
    drush-script: drush9
The preceding block shows a sample Drush alias in the YAML format. If you run the drush uli command from your terminal, you might encounter the following warnings and deprecated messages:
$ vendor/bin/drush @default.dev uli
[warning] Undefined property: Drush\Commands\core\LoginCommands::$uri ExecTrait.php:38
[warning] does not appear to be a resolvable hostname or IP, not starting browser. You may need to use the --uri option in your command or site alias to indicate the correct URL of this site.
Deprecated: Required parameter $args follows optional parameter $command in /usr/local/drush9/vendor/drush/drush/includes/batch.inc on line 115
Deprecated: Required parameter $options follows optional parameter $command in /usr/local/drush9/vendor/drush/drush/includes/batch.inc on line 115
http://d9forcl57747.dev.acquia-sites.com/user/reset/1/1635526597/49RN0AE6RAINusSIxmSzRJgfoJmp3suVhdSFl2Vz2L0/login
The drush uli command works and returns the desired output even with the warnings. If these warnings block your development, remove the following segment from the YAML file:
paths:
  drush-script: drush9
The current Drupal version is incompatible with certain Acquia Drush commands¶
The current Drupal version is not compatible with the following legacy commands:
You must use Composer to build websites running the current Drupal version. You cannot use Drush to update your Drupal websites.
Using Drush 9 may display error messages¶
If you are using Drush 9 in a Cloud Platform environment, Cloud Platform may display errors such as the following:
Fatal error: require(): Failed opening required '/var/www/site-php//D8--[sitename]-settings.inc'
Drush command terminated abnormally due to an unrecoverable error.
Workaround: To determine if this Drush 9-based behavior is affecting your use of the product, execute the following commands from a command prompt:
drush9 php-eval 'var_export($_ENV["AH_SITE_NAME"], true);'
printenv 'AH_SITE_NAME'
These values must be equal. If they are not:
- Edit your website's settings.php file.
- Before the Acquia require line, add $_SERVER['PWD'] = DRUPAL_ROOT; (see the sketch after this list).
- Save the settings.php file.
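A minimal sketch of the relevant part of settings.php; the Acquia include path shown is illustrative, so keep the require line your site already uses:
<?php
// Work around the Drush 9 path-detection issue before the Acquia include runs.
$_SERVER['PWD'] = DRUPAL_ROOT;

// Acquia require line (example path only).
if (file_exists('/var/www/site-php/examplesite/examplesite-settings.inc')) {
  require '/var/www/site-php/examplesite/examplesite-settings.inc';
}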
Drush aliases are downloaded only for active on-demand environments¶
Drush aliases for Cloud Platform CD environments are downloaded only if the environment is active.
Do not install Drush in your home directory¶
Installing Drush in your home directory can cause unexpected task failures. Acquia recommends that you require a site-local Drush as part of your codebase, rather than installing it in your home directory, and that you run your commands with a specific major version of Drush, as described in Using Drush on Cloud Platform.
An error message is displayed while running Drush 12 commands¶
Cloud Platform displays the following error message while running Drush 12 commands:
/bin/bash: /app/vendor/drush: No such file or directory
The Drush alias files that Cloud Platform generates are not compatible with Drush 12.
Workaround: Update the Drush alias files as follows:
paths:
  drush-script: '/var/www/html/${AH_SITE_NAME}/vendor/bin/drush'
Known issues with external services¶
Legacy alerts from New Relic fail with the removal of TLS 1.0¶
Legacy alerts from New Relic fail with the removal of TLS 1.0 from Acquia systems. For help in updating your alerts, see: Introduction to alerts and applied intelligence.
Website duplication may cause rsync issues¶
Website duplication can cause rsync issues in the distributed file system, resulting in the following log error:
rsync warning: some files vanished before they could be transferred (code 24) at main.c(1183) [sender=3.1.1]
Could not rsync from [error]
Workaround: Create a Support ticket and specify the affected files listed in the rsync error.
The Cloud Platform CDN does not support the use of custom Varnish configurations (custom VCLs) with Cloud Platform-hosted applications. Attempting to do so can cause Cloud Platform Enterprise to experience conflicting caching behavior.
Other notable known issues¶
Self-service SSL certificates overwrite Acquia’s default certificate¶
When requesting the Acquia default domain, the subscriber’s self-service SSL certificate loads instead of the Acquia SSL certificate that covers the Acquia default domains. This behavior causes an SSL error in the browser. Install and activate two or more custom certificates on any affected environment to remove this error on the Acquia default domain.
Emergency-level logging broadcasts logged data to open SSH sessions¶
If a Drupal module logs information with the RFC 5424 severity of emergency, syslog broadcasts the information to open SSH sessions. These broadcasts can disrupt commands in progress.
Workaround: Either modify the module’s logging function to decrease the message’s severity from emergency to another level, or disable the module’s logging feature.
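For example, a sketch of the first option, assuming the module logs through Drupal's logger service; the channel name mymodule and the message are placeholders:
<?php
// Before: an emergency-level message is broadcast to open SSH sessions by syslog.
\Drupal::logger('mymodule')->emergency('Payment gateway unreachable.');

// After: a lower RFC 5424 severity, such as critical, avoids the broadcast.
\Drupal::logger('mymodule')->critical('Payment gateway unreachable.');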
The design of the Acquia platform can sometimes cause incompatibilities with Drupal contributed modules. Drupal modules not supported on Cloud Platform are listed in Modules and applications incompatible with Cloud Platform. Other modules usable on Cloud Platform with caution or special configuration are listed in Modules to use with caution on Cloud Platform.
You must rebuild caches after updating Drupal¶
Subscribers upgrading their websites to newer versions of the current Drupal version must rebuild website caches after upgrading. Unexpected errors and problems with some page displays may occur until you rebuild caches.
Running update.php on the current Drupal version can display error messages¶
Attempting to run update.php on the current Drupal version fails with the following error message:
In order to run update.php you need to either be logged in as admin or have
set $settings['update_free_access']
Workaround: Use one of the following methods:
- Log in to the website as a user with administrator privileges before running update.php.
- Temporarily set $settings['update_free_access'] = TRUE in settings.php, and remove the setting after the update completes.
Altering the $databases array causes connection errors¶
Websites that alter the $databases array in settings.php to enable third-party database connections may experience connection errors if the connection setup occurs too late in settings.php.
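A minimal sketch, assuming the third-party connection is added immediately after the Acquia require line rather than later in settings.php; the connection key, host, and credentials are placeholders:
<?php
// Define the third-party connection right after the Acquia require line so the
// connection setup is not delayed further down in settings.php.
$databases['external']['default'] = [
  'driver'   => 'mysql',
  'database' => 'external_db',
  'username' => 'external_user',
  'password' => getenv('EXTERNAL_DB_PASSWORD'),
  'host'     => 'db.example.com',
  'port'     => 3306,
  'prefix'   => '',
];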
You cannot import a local website archive using the product interface¶
The Cloud Platform user interface does not support importing a website archive from a local file. Instead, you can import a site archive from a URL or import using Drush.
All Twig cache files for an environment are stored in a single directory¶
Clearing a website’s Twig caches clears the Twig caches for all websites hosted by that environment. For more information about how Cloud Platform stores Twig caches, see Twig caches.
Resolution for Site Factory subscribers: Upgrade your installed version of the Site Factory Connector module to version 8.x-1.59 or later, or version 8.x-2.59 or later (for Drush 9 support).
Websites running the current Drupal version can have theme change issues with Twig caches¶
When you change themes and perform a code deployment on Cloud Platform applications running the current Drupal version, you might experience issues where cached Twig templates fall out of sync on different web infrastructure. The problem arises from having separate copies of the compiled Twig templates on each web infrastructure and a related Drupal core issue.
Workaround: When you make changes to themes in Cloud Platform Enterprise applications running the current Drupal version, connect to each web infrastructure, and run the following command to remove the outdated Twig templates:
drush @[sitename].[prod] --uri=http://[site_URL]/ ev '\Drupal\Core\PhpStorage\PhpStorageFactory::get("twig")->deleteAll();'
The AuthUserFile directive in .htaccess is not supported¶
The AuthUserFile directive in the Apache .htaccess file sets the name of a text file containing a list of users and passwords for user authentication. Cloud Platform does not support AuthUserFile, since its value must be either an absolute path or a path relative to the infrastructure root, and won’t work across different Cloud Platform environments.
The SymLinksIfOwnerMatch option in .htaccess is not supported¶
Due to the configuration of directory ownership and permissions for your Cloud Platform application’s codebase and files directories, use of the SymLinksIfOwnerMatch option in your application’s .htaccess file will prevent your web infrastructure from accessing any of the assets in your files directory. You must use the FollowSymLinks option instead.
Do not use the LOGNAME environment variable to determine sitegroup or sitename in your .htaccess file or in other custom scripts.
Workaround: Use the AH_SITE_NAME, AH_SITE_GROUP, or AH_SITE_ENVIRONMENT environment variables if you need environment-aware variables in your scripts.
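For example, a minimal PHP sketch of an environment-aware script; the fallback values and the example values in the comments are placeholders:
<?php
// Use the Acquia-provided variables rather than LOGNAME.
$site_name  = getenv('AH_SITE_NAME') ?: 'unknown';        // for example, examplesite.prod
$site_group = getenv('AH_SITE_GROUP') ?: 'unknown';       // for example, examplesite
$site_env   = getenv('AH_SITE_ENVIRONMENT') ?: 'unknown'; // for example, prod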
Using mysqldump on MySQL 5.7 results in error¶
If you run mysqldump on MySQL 5.7, the following error occurs:
mysqldump: Error: 'Access denied; you need (at least one of) the PROCESS privilege(s) for this operation' when trying to dump tablespaces
Workaround: If you don’t need to dump the tablespace information, invoke mysqldump with the --no-tablespaces option.
Issue with inactive load balancer when using log forwarding¶
When log forwarding is configured with a destination that includes an IP allowlist, the Cloud Platform user interface might display an error message for the inactive load balancer, indicating a connection issue or incorrect configuration. This occurs because the inactive load balancer uses a dynamic IP address until it becomes active. However, log forwarding from the primary or active load balancer functions correctly.
Certain operations fail in PHP 8.4 for FedRAMP applications¶
PHP 8.4 includes OpenSSL 3.x, which has strict TLS security requirements. This impacts all operations that require SSL connectivity. For example, when you attempt to perform system backups in FedRAMP applications, you might get the following error:
AWS HTTP error: cURL error 35: TLS connect error: error:03000072:digital envelope routines::decode error
Workaround: If FIPS is enabled for your application, do not upgrade to PHP 8.4.
Download link for MySQL slow query logs does not work¶
You might not be able to download the MySQL slow query log through the Cloud Platform user interface.
Workaround: Check the MySQL slow query log stream and note the queries that cause issues. Optimizing a few of those queries reduces the log size. You can then retry downloading the MySQL slow query log.
If you delete a backup manually, the backup continues to appear in the list on the Databases page for about 24 hours.
When downloading database backups using the Cloud Platform API v2, users experience failures if an Elastic Load Balancer (ELB) is present.
Workaround: If the ELB is not in use, create a Support ticket and request to have the ELB removed.
In the legacy Cloud Platform interface, when a user selects Resize > Cancel, occasionally the Cancel button does not dismiss the dialog box.
Database backup downloads fail while using latest version of Chrome¶
Chrome version 87 and later disables mixed-content downloads. As a result, database backup downloads initiated in the Cloud user interface fail in some cases.
Workaround: The Cloud Platform user interface displays the HTTP version of the database backup download URL. You can open this URL in a new browser window to restart the download.
Using Midnight Commander can cause file service interruptions¶
Cloud Platform Enterprise subscribers who use GNU Midnight Commander can experience service interruptions when trying to access their GFS mount. Acquia currently recommends that you do not use Midnight Commander with your Acquia-hosted websites.
After adding an access control list to your Varnish configuration file, you may not be able to download database backups through the Cloud user interface. This happens because, while limiting access to your sites, your access control list also limits access to your Acquia default domain, such as example.prod.acquia-sites.com, which is required for database downloads to function.
To download database backups, use one of the following methods:
- In the access control list of your VCL, add the IP addresses that can download backups.
- Perform database downloads through the Cloud API.
You cannot generate certificate signing requests (CSRs) for Node.js classic applications through the Cloud Platform user interface.
Workaround: Upload SSL certificates manually.
Multi-region failover does not support multisites¶
The Cloud Platform multi-region failover service supports only a single database per environment.
Issues occur with certain applications and methods after enabling FIPS in Cloud Classic applications¶
When you enable FIPS in a Cloud Classic environment, certain applications and methods might break. For example, Ruby might fail with the following error:
FIPS mode is enabled, bundler can't use the CompactIndex API
Workaround: Modify your application to use only FIPS-approved cryptographic methods and avoid using methods like md5 that are prohibited in FIPS. For more information about FIPS-approved cryptographic methods, see supported methods.
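For example, a sketch of replacing an MD5-based checksum with a FIPS-approved algorithm in PHP; the $payload value is a placeholder:
<?php
$payload = 'example data';

// Before: md5 is prohibited under FIPS.
$checksum = md5($payload);

// After: use a FIPS-approved hash such as SHA-256.
$checksum = hash('sha256', $payload);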
Stack Metrics in the Cloud Platform user interface might display an incorrect storage value. This value might differ from what you see when accessing the platform through SSH.
Workaround: Use SSH to access /mnt/gfs to get the accurate data.