This page describes the known issues in Cloud Next. For known issues that apply to Cloud Platform generally and to Cloud Classic specifically, see Known issues in Cloud Platform and Known issues in Cloud Classic, respectively.
Copy or import of databases fails
If the definition of a VIEW contains the database name in its defining query, you cannot copy or import databases.
Workaround:
Remove all instances of the database name (db-name-printed-here in the following examples) from the VIEW definition:
Confirm the database name:

```
mysql> SHOW DATABASES;
+----------------------+
| Database             |
+----------------------+
| information_schema   |
| db-name-printed-here |
+----------------------+
2 rows in set (0.02 sec)
```
Identify the views that contain references to the database name:

```
mysql> SELECT TABLE_NAME AS VIEW_NAME, TABLE_SCHEMA AS DB_NAME FROM INFORMATION_SCHEMA.VIEWS WHERE VIEW_DEFINITION LIKE '%db-name-printed-here%';
+----------------+----------------------+
| VIEW_NAME      | DB_NAME              |
+----------------+----------------------+
| some-view-name | db-name-printed-here |
+----------------+----------------------+
1 row in set (0.06 sec)
```
Review the VIEW and remove all occurrences of the database name from the SQL query:

```
mysql> SHOW CREATE VIEW some-view-name \G
*************************** 1. row ***************************
                View: some-view-name
         Create View: CREATE ALGORITHM=UNDEFINED DEFINER=`s10628`@`%` SQL SECURITY DEFINER VIEW `some-view-name` AS select `db-name-printed-here`.`watchdog`.`wid` AS `wid`,`db-name-printed-here`.`watchdog`.`severity` AS `severity`,`db-name-printed-here`.`watchdog`.`message` AS `message` from `db-name-printed-here`.`watchdog` where (`db-name-printed-here`.`watchdog`.`type` = 'cron')
character_set_client: utf8
collation_connection: utf8_general_ci
1 row in set (0.00 sec)
```
For more information on how to modify a VIEW, read the ALTER VIEW Statement.
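After identifying the offending view, you can redefine it without the database-name qualifier. The following is a minimal sketch based on the example output above; the host, user, database, view, and column names are placeholders taken from the examples, not values from your environment:

```shell
# Redefine the example view so that its query no longer references the
# database by name. All identifiers and credentials are placeholders.
mysql -h "$DB_HOST" -u "$DB_USER" -p db-name-printed-here <<'SQL'
ALTER VIEW `some-view-name` AS
SELECT `watchdog`.`wid` AS `wid`,
       `watchdog`.`severity` AS `severity`,
       `watchdog`.`message` AS `message`
FROM `watchdog`
WHERE `watchdog`.`type` = 'cron';
SQL
```

After the change, rerun the INFORMATION_SCHEMA.VIEWS query to confirm that no views still reference the database name.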
Process limit for Drupal, SSHD, Scheduled Tasks, and Cloud Hooks
In Cloud Next, you cannot have more than 5000 processes for Drupal, SSHD, scheduled tasks, and Cloud Hooks. If you exceed the process limit, the system restarts the service. This interrupts running requests and results in 50x errors for Drupal.
Interruption in web requests taking longer than 10 minutes
In Cloud Next applications, web requests that take longer than 10 minutes might be interrupted by routine platform maintenance activities.
Two daily backups
The Cloud Platform user interface might occasionally display two daily backup tasks for the same day, indicating backups taken at different times of the day.
“End of script output before headers” error occurs if the HTTP response header exceeds 8 KB
Cloud Next introduces a limit of 8 KB to HTTP response headers. When using HTTP headers, ensure that the header size does not exceed this limit. For example, this limit might be triggered when you use:
- The Acquia Purge module configured to output debug headers
- The Security Kit (seckit) module configured to output X-Content-Security-Policy headers
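To check whether a page's response headers approach the limit, you can measure their size with curl. A quick sketch; the URL is a placeholder:

```shell
# Print the size, in bytes, of the response headers for a page.
# Keep this value under 8192 to stay within the Cloud Next limit.
curl -sS -o /dev/null -D - https://www.example.com/ | wc -c
```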
File copy operation takes longer in Cloud Next compared to Cloud Classic
The file copy operation in Cloud Next takes longer as compared to Cloud Classic. This occurs because the files are first copied from the production environment to an intermediate ODE environment, and then to a migration environment. After the copy operation is complete, the system deletes the ODE environment, thereby keeping only the migration environment. This additional step in the migration process consumes more time.
Change in mod_headers behavior in Cloud Next
In Cloud Classic, mod_headers directives in the .htaccess file are ignored for PHP and Drupal requests, and are applied only to static files. However, in Cloud Next, mod_headers directives in the .htaccess file are applied. This might result in unexpected or unwanted changes in application behavior. Acquia recommends that you review your .htaccess file for mod_headers usage.
MySQL 5.7 features incompatibilities on Cloud Next
Cloud Next leverages AWS Aurora MySQL. Some MySQL 5.7 features are not supported on Cloud Next. For more information, see the list of unsupported MySQL 5.7 features.
Scheduled jobs must not use hardcoded host names in log paths
Scheduled jobs or cron jobs in Cloud Next must not use hardcoded host names in log paths. However, if you use /shared/logs as the log directory, you can use hardcoded paths.
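For example, a scheduled job can write its output to /shared/logs, which does not depend on which host runs the job. The drush path and site name below are placeholders; adjust them to your environment:

```shell
# Portable: /shared/logs is safe regardless of which host runs the job.
# The drush path and EDIT_ME_site are placeholders.
/usr/local/bin/drush -r /var/www/html/EDIT_ME_site/docroot cron >> /shared/logs/drush-cron.log 2>&1
```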
Limitations with scheduled jobs
When you create scheduled jobs in Cloud Next:
- You cannot set crons at frequencies of less than 5 minutes.
- Your cron task duration must not be more than 3 hours. If your cron job lasts longer than that, it terminates.
Limitations with Cloud Hooks
The maximum duration of Cloud Hook scripts is 3 hours. If your Cloud Hook script lasts longer than that, it terminates.
Cloud Hook scripts and all child processes have a maximum memory limit of 2000 MB. If these processes exceed the available memory, the Cloud Hook logs display messages such as Killed or Exit status 137.
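Exit status 137 indicates a process that was terminated with SIGKILL (128 + 9), which is how the memory limit is enforced. A quick local illustration:

```shell
# A process killed with SIGKILL (signal 9) exits with status 128 + 9 = 137.
sh -c 'kill -KILL $$' || status=$?
echo "exit status: $status"
```

Running this prints exit status: 137, the same status that appears in the Cloud Hook logs.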
Issues in connecting to Cloud Next environments with locally-installed MySQL Workbench
If you have MySQL Workbench installed locally, you might not be able to connect to Cloud Next environments. This issue occurs for a few versions of MySQL Workbench.
If you face issues connecting to Cloud Next environments from MySQL Workbench:
- Locate the database credentials listed on your Databases page.
- Open Terminal on your local machine and run:

```
ssh -L $LOCAL_PORT:$DB_HOST:3306 $SSH_STRING
```

Here:
- LOCAL_PORT is the port to which Workbench must connect when using localhost.
- DB_HOST is the hostname obtained from the Cloud Platform user interface.
- SSH_STRING is the full connection string from the Cloud Platform user interface. For example, user@something.
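For example, with placeholder values filled in, the tunnel command looks like the following; afterwards, configure Workbench to connect to 127.0.0.1 on the chosen local port. Every host name and user below is illustrative only:

```shell
# Forward local port 33061 to the remote database host through SSH.
# Replace all EDIT_ME_ values with the credentials from the
# Cloud Platform user interface.
ssh -L 33061:EDIT_ME_db_host.example.com:3306 EDIT_ME_user@EDIT_ME_ssh_host.example.com
```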
Code deployment in Cloud Next
Code deployment in Cloud Next can take up to 1 hour, and environments may intermittently experience deployment times of more than 5 minutes. Ensure that you close your SSH session before starting a code deployment. If you start a code deployment while in an SSH session, the process might fail.
Git repositories in Cloud Next must not exceed 2 GB. If your repository size exceeds this limit, code deployment tasks might fail without displaying any public logs. In such a case, you must verify that the combined size of all files in the specified Git branch or tag is less than 2 GB. For more information, visit Disk Storage in Cloud Next.
Unable to increase PHP file upload size
Currently, you cannot increase the PHP file upload size values beyond the limits available in the Cloud Platform user interface. The maximum size for Cloud Next is 1024 MB.
Workarounds: You can use either of the following workarounds:
- Use the contributed module, DropzoneJS, and specifically the chunked uploads patch in DropzoneJS. For more information, see Use contributed modules for file upload handling.
- Upload a small dummy file for the next version of your software through the Drupal user interface. Then, access the Acquia Cloud service through SSH or SFTP and push the actual file, replacing the dummy file. This process enables you to reference the file in Drupal and manage it there as you prefer.
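The file replacement in the second workaround can be done with a standard transfer tool such as scp. A sketch; the host and paths are placeholders:

```shell
# Overwrite the dummy file that was uploaded through Drupal with the
# real file. The EDIT_ME_ host and paths are placeholders.
scp ./EDIT_ME_real_file.tar.gz EDIT_ME_user@EDIT_ME_host.example.com:/var/www/html/EDIT_ME_site/docroot/sites/default/files/EDIT_ME_real_file.tar.gz
```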
memcache_admin incompatibility for FedRAMP customers
For FedRAMP customers, Cloud Next uses individual mcrouter instances on each pod. This obscures information such as cache hits and misses. Therefore, the memcache_admin module does not correctly report the status of memcache instances: cumulative statistics, available memory, and evictions are reported as zero, and other statistics might also be misreported.
Workaround:
To gather memcache statistics, run the following command in an SSH session:

```
acquia-memcache stats
```

This command displays statistics from the available memcache instances.
Drupal watchdog logs not captured for SSH, Cloud Hooks, or Scheduled Jobs
Tasks executed through SSH, Cloud Hooks, or Scheduled Jobs do not run the syslog service. Logging done through the Drupal Logging API, also known as Watchdog, is only sent to the standard output and standard error channels, and not to the drupal-watchdog logs that can be downloaded from the Cloud Platform user interface or Cloud API.
If an application uses the Log Forwarding feature, this same logging is not sent to a log forwarding destination.
Issues occur if codebases contain a CHARSET or COLLATE name
The database copy, backup, and restore operations are updated to maintain compatibility between MySQL 5.7 and MySQL 8. This ensures that these operations continue to function as expected during a MySQL version upgrade. During these operations, the update changes any data in your database from utf8mb4_0900_ai_ci to utf8mb4_general_ci. Acquia cannot target a specific collation or charset. Therefore, if your database contains content with the utf8mb4_0900_ai_ci collation, the system updates it to utf8mb4_general_ci.
This update does not apply to database dumps and restores that are done manually with Drush. Therefore, such manual dumps might fail to import when you copy them between databases on different MySQL versions.
Workaround:
If you have a database dump that you exported manually and you cannot import it properly, you can convert the file by running a command similar to the following:

```shell
zcat EDIT_ME_your_db_dump_file.sql.gz | sed -e 's/CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci/CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci/g' -e '/CHARSET=utf8mb4/!b' -e '/COLLATE/!s/CHARSET=utf8mb4/CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci/' | gzip -f > EDIT_ME_new_database_backup_file.sql.gz
```
The preceding example command assumes that:
- You have a database backup file that is gzip-compressed in the current folder.
- You are running a Cloud Platform SSH connection or Cloud IDE.
Therefore, you must update the command depending on your requirements.
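As a self-contained illustration of what the command does, the following sketch builds a tiny gzip-compressed sample dump that uses the MySQL 8 collation and runs the same substitution over it. The file names are placeholders, and gzip -cd is equivalent to zcat:

```shell
# Build a tiny sample dump that uses the MySQL 8 default collation.
printf 'CREATE TABLE t (id INT) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;\n' \
  | gzip > sample_dump.sql.gz

# Apply the same substitution as the workaround command.
gzip -cd sample_dump.sql.gz \
  | sed -e 's/CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci/CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci/g' \
  | gzip -f > converted_dump.sql.gz

# Confirm that the converted dump references only the compatible collation.
gzip -cd converted_dump.sql.gz | grep -c 'utf8mb4_general_ci'
```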
Known issues with MySQL 5.7 to MySQL 8.0 upgrades
I use the tx_isolation and tx_read_only variables in MySQL 5.7. Are these variables supported in MySQL 8.0?
No. The MySQL transaction variables tx_isolation and tx_read_only are supported in MySQL 5.7 but not in MySQL 8.0. Therefore, if you continue to use the legacy variables after you upgrade your application to MySQL 8.0, you might encounter the following errors:
- Drupal might fail to connect to the database and log an error message. For example:
  SQLSTATE[HY000]: General error: 1193 Unknown system variable 'tx_isolation'
- If you use Drush to connect to Drupal, you might get different error messages based on your use case. For example:
  SQLSTATE[HY000]: General error: 1193 Unknown system variable 'tx_isolation' or Failed to connect to any database servers for database [DB UUID]
Workaround:
Replace the legacy variables tx_isolation and tx_read_only with their aliases transaction_isolation and transaction_read_only in your application code. These aliases are supported in both MySQL 5.7 and MySQL 8.0. For example, if your settings.php file has the following:
```php
$databases['default']['default']['init_commands'] = array(
  'isolation' => "SET SESSION tx_isolation='READ-COMMITTED'",
);
```
You must update it as follows:
```php
$databases['default']['default']['init_commands'] = array(
  'isolation' => "SET SESSION transaction_isolation='READ-COMMITTED'",
);
```
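Before upgrading, you can search your codebase for remaining uses of the legacy variables. The sketch below creates a sample settings.php fragment purely for illustration; in practice, run the grep against your real application root:

```shell
# Create a sample settings.php fragment (illustration only).
mkdir -p docroot/sites/default
cat > docroot/sites/default/settings.php <<'EOF'
$databases['default']['default']['init_commands'] = array(
  'isolation' => "SET SESSION tx_isolation='READ-COMMITTED'",
);
EOF

# Any match must be changed to transaction_isolation or
# transaction_read_only before the MySQL 8.0 upgrade.
grep -rnE 'tx_isolation|tx_read_only' docroot/
```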
Can I use Fast 404 module in sites that run MySQL 8.0 or later?
No, Fast 404 is incompatible with MySQL 8.0 and later. This impacts only the 7.x versions of Drupal and Fast 404.
If your sites use this module after they are upgraded to MySQL 8.0 or later, the sites might encounter degraded functionality and the following errors:
- SQL errors: These occur because of deprecated or incompatible SQL syntax. For example, SQLSTATE[42000]: Syntax error or access violation: 1064.
- Broken 404 pages: The system displays blank pages and error messages because of missing files or inaccurate 404 pages.
- Site instability: A large number of 404 errors makes websites unstable. Such errors are caused by missing files, images, or bots, and can trigger excessive memory use, a "white screen of death" (WSOD), or inaccessible pages.
- Performance degradation: The site becomes non-responsive because 404s are not handled efficiently.
- Inability to uninstall modules: After you upgrade to MySQL 8.0, you might not be able to update or uninstall this module correctly.
Workaround:
Uninstall or disable Fast 404 before upgrading to MySQL 8.0 or later.
Implement the code changes mentioned in the Fast 404 pages section in the Drupal core default.settings.php file.
Can I use certain reserved keywords in MySQL 8.0?
If your database or SQL queries use certain reserved keywords as unquoted column names, table names, or aliases, your code will fail in MySQL 8.0 with SQL syntax errors. Examples of such keywords include:
- RANK
- SYSTEM
- WINDOW
Such failures cause application features, deployments, or database migrations to break or become unavailable after the upgrade.
For example, the following SQL queries break in MySQL 8.0:
```sql
SELECT id, rank FROM mytable WHERE rank = 1;
SELECT * FROM system WHERE id = 1;
SELECT window FROM mytable;
```
Workaround:
Use backticks (`) to quote reserved words:
```sql
SELECT id, `rank` FROM mytable WHERE `rank` = 1;
SELECT * FROM `system` WHERE id = 1;
SELECT `window` FROM mytable;
```
Solution:
- Review the list of keywords new in MySQL 8.0 to identify any that might conflict with your existing database tables, columns, or aliases. For more information, visit MySQL 8.0 New Keywords and Reserved Words.
- Audit your custom and contributed SQL queries, and your database schema, for use of these reserved words as unquoted identifiers.
- Enclose these identifiers in backticks, or rename fields and tables to avoid reserved words.
- Make these changes before upgrading to MySQL 8.0 to prevent SQL errors and application disruptions.
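The audit step can be approximated with a whole-word search over your SQL and module files. The sketch below creates a sample query file purely for illustration; in practice, point the grep at your custom code:

```shell
# Create a sample query file (illustration only).
mkdir -p modules/custom
printf 'SELECT id, rank FROM mytable WHERE rank = 1;\n' > modules/custom/example.sql

# -w matches whole words only; every hit is a candidate for quoting
# with backticks or renaming. Extend the pattern with other new keywords.
grep -rniwE 'rank|system|window' modules/custom/
```

Matches that are already backtick-quoted still appear in this output, so review each hit manually.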