This page describes the known issues in Cloud Next. For known issues that apply to Cloud Platform generally and to Cloud Classic specifically, see Known issues in Cloud Platform and Known issues in Cloud Classic, respectively.
Copy or import of databases fails
If the definition of a VIEW contains the database name in its defining query, you cannot copy or import databases.
Workaround:
To remove any instances of the database name db-name-printed-here from the VIEW:
Confirm the database name as follows:
mysql> show databases;
+----------------------------------+
| Database                         |
+----------------------------------+
| information_schema               |
| db-name-printed-here             |
+----------------------------------+
2 rows in set (0.02 sec)
Identify the views that contain references to the database name as follows:
mysql> SELECT TABLE_NAME AS VIEW_NAME,TABLE_SCHEMA AS DB_NAME FROM INFORMATION_SCHEMA.VIEWS WHERE VIEW_DEFINITION like '%db-name-printed-here%';
+-------------------------------------------+----------------------------------+
| VIEW_NAME                                 | DB_NAME                          |
+-------------------------------------------+----------------------------------+
| some-view-name                            | db-name-printed-here             |
+-------------------------------------------+----------------------------------+
1 row in set (0.06 sec)
Review the VIEW and remove all occurrences of the database name from the SQL query as follows:
mysql> SHOW CREATE VIEW some-view-name \G
*************************** 1. row ***************************
                View: some-view-name
         Create View: CREATE ALGORITHM=UNDEFINED DEFINER=`s10628`@`%` SQL SECURITY DEFINER VIEW `some-view-name` AS select `db-name-printed-here`.`watchdog`.`wid` AS `wid`,`db-name-printed-here`.`watchdog`.`severity` AS `severity`,`db-name-printed-here`.`watchdog`.`message` AS `message` from `db-name-printed-here`.`watchdog` where (`db-name-printed-here`.`watchdog`.`type` = 'cron')
character_set_client: utf8
collation_connection: utf8_general_ci
1 row in set (0.00 sec)
For more information on how to modify the VIEW, read the ALTER VIEW Statement.
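For illustration only, the following is a minimal sketch of recreating the example view above without the schema prefix. The view name, column list, and table name are placeholders taken from the example output; adjust them to match your own database.
# Hypothetical sketch: redefine the example view without the `db-name-printed-here` prefix.
# The view name, columns, and table are placeholders from the SHOW CREATE VIEW output above.
mysql db-name-printed-here -e "ALTER VIEW \`some-view-name\` AS SELECT wid, severity, message FROM watchdog WHERE type = 'cron';"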
Process limit for Drupal, SSHD, Scheduled Tasks, and Cloud Hooks
In Cloud Next, you cannot have more than 5000 processes for Drupal, SSHD, scheduled tasks, and Cloud Hooks. If you exceed the process limit, the system restarts the service. This interrupts running requests and results in 50x errors for Drupal.
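As a hedged, illustrative check, you can approximate how many processes are running under your application user from an SSH session; the platform's own accounting might group processes differently.
# Hedged sketch: count processes owned by the current (application) user in an SSH session.
# This is an approximation; the platform may count processes differently.
ps -u "$(whoami)" --no-headers | wc -l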
Interruption in web requests taking longer than 10 minutes
In Cloud Next applications, web requests that take longer than 10 minutes might be interrupted by routine platform maintenance activities.
Two daily backups
The Cloud Platform user interface may occasionally display two daily backup tasks for the same day, indicating backups taken at different times of the day.
“End of script output before headers” error occurs if the HTTP response header exceeds 8 KB
Cloud Next introduces a limit of 8 KB to HTTP response headers. When using HTTP headers, ensure that the header size does not exceed this limit. For example, this limit might be triggered when you use:
- Acquia Purge module that is configured to output debug headers
- Security Kit (seckit) module that is configured to output X-Content-Security-Policy headers
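As a hedged, illustrative check of how close a page's response headers are to the 8 KB limit, you can measure the header size with curl; the URL below is a placeholder.
# Hedged sketch: measure the approximate size, in bytes, of a page's response headers.
# Replace https://example.com/ with a URL from your application.
curl -sS -o /dev/null -D - https://example.com/ | wc -c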
File copy operation takes longer in Cloud Next compared to Cloud Classic
The file copy operation in Cloud Next takes longer than in Cloud Classic. This occurs because the files are first copied from the production environment to an intermediate ODE environment, and then to a migration environment. After the copy operation is complete, the system deletes the ODE environment, keeping only the migration environment. This additional step in the migration process consumes more time.
Change in mod_headers behavior in Cloud Next
In Cloud Classic, mod_headers directives in the .htaccess file are ignored for PHP and Drupal requests, and are only applied to static files. However, in Cloud Next, mod_headers directives in the .htaccess file are applied. This might result in unexpected or unwanted changes in application behavior. Acquia recommends that you review your .htaccess file for mod_headers usage.
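As an illustrative sketch, one way to review for mod_headers usage is to search the .htaccess file for Header and RequestHeader directives; the docroot path below is an assumption about where your .htaccess file lives.
# Hedged sketch: list mod_headers directives (Header and RequestHeader) in .htaccess.
# Adjust the path if your .htaccess file is not in docroot/.
grep -nE '^[[:space:]]*(Header|RequestHeader)[[:space:]]' docroot/.htaccess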
MySQL 5.7 feature incompatibilities on Cloud Next
Cloud Next leverages AWS Aurora MySQL. Some MySQL 5.7 features are not supported on Cloud Next. For more information, see the list of unsupported MySQL 5.7 features.
Scheduled jobs must not use hardcoded host names in log paths
Scheduled jobs or cron jobs in Cloud Next must not use hardcoded host names in log paths. However, if you use /shared/logs as the directory, you can use hardcoded paths.
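For illustration, the following hedged sketch shows a scheduled-job command that writes its output under /shared/logs instead of a path that embeds a specific host name; the drush command and log file name are placeholders.
# Hedged sketch: send a scheduled job's output to /shared/logs rather than a
# host-specific log directory. The command and log file name are placeholders.
drush cron >> /shared/logs/drush-cron.log 2>&1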
Limitations with scheduled jobs
When you create scheduled jobs in Cloud Next:
- You cannot schedule cron jobs at frequencies of less than 5 minutes.
- Your cron task duration must not be more than 3 hours. If your cron job lasts longer than that, it terminates.
Limitations with Cloud Hooks
The maximum duration of Cloud Hook scripts is 3 hours. If your Cloud Hook script lasts longer than that, it terminates.
Cloud Hook scripts and all child processes have a maximum memory limit of 2000 MB. If these processes exceed the available memory, the Cloud Hook logs display messages such as Killed or Exit status 137.
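As a hedged illustration, a long-running step inside a Cloud Hook script can be bounded with timeout so that it fails fast instead of reaching the 3-hour limit; the time budget, drush command, and log path below are placeholders.
# Hedged sketch: bound a long-running step inside a Cloud Hook script.
# The 150-minute budget, drush command, and log path are placeholders; adjust to your deployment.
timeout 150m drush deploy -y >> /shared/logs/cloud-hook-deploy.log 2>&1 || exit 1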
Issues in connecting to Cloud Next environments with locally-installed MySQL Workbench
If you have MySQL Workbench installed locally, you might not be able to connect to Cloud Next environments. This issue occurs for a few versions of MySQL Workbench.
If you face issues connecting to Cloud Next environments from MySQL Workbench:
- Locate the database credentials listed on your Databases page.
- Open Terminal on your local machine and run:
ssh -L $LOCAL_PORT:$DB_HOST:3306 $SSH_STRING
Here:
- LOCAL_PORT is the port to which Workbench must connect when using localhost.
- DB_HOST is the hostname obtained from the Cloud Platform user interface.
- SSH_STRING is the full connection string from the Cloud Platform user interface. For example, user@something.
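For illustration only, the following hedged example uses placeholder values; substitute the local port, database host, and SSH string with the values shown for your environment on the Databases page.
# Hedged example with placeholder values; replace the port, database host, and SSH string
# with the values from the Cloud Platform user interface.
ssh -L 33066:db-host-printed-here:3306 user@ssh-host-printed-here
# In MySQL Workbench, connect to host 127.0.0.1 on port 33066 with the database
# username and password shown on the Databases page.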
Code deployment in Cloud Next
Code deployment in Cloud Next can take a maximum of 1 hour. Environments on Cloud Next might intermittently experience code deployment times of more than 5 minutes. Ensure that you close your SSH session before starting a code deployment. If you start a code deployment while in an SSH session, the process might fail.
Git repositories in Cloud Next must not exceed 2 GB. If your repository size exceeds this limit, code deployment tasks might fail without displaying any public logs. In such a case, you must verify that the combined size of all files in the specified Git branch or tag is less than 2 GB. For more information, see Disk Storage in Cloud Next.
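As a hedged illustration, one way to estimate the size of a branch or tag before deploying is to make a shallow clone of only that branch and measure it; the repository URL and branch name below are placeholders.
# Hedged sketch: estimate the combined file size of a single branch or tag before deploying.
# The repository URL and branch name are placeholders; use your own.
git clone --branch main --depth 1 https://example.com/your-repo.git repo-size-check
du -sh --exclude=.git repo-size-check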
Unable to increase PHP file upload size
Currently, you cannot increase the PHP file upload size values beyond the limits available in the Cloud Platform user interface. The maximum size for Cloud Next is 1024 MB.
Workarounds: You can use either of the following workarounds:
- Use the contributed module, DropzoneJS, and specifically the chunked uploads patch in DropzoneJS. For more information, see Use contributed modules for file upload handling.
- Upload a small dummy file for the next version of your software through the Drupal user interface. Then connect to the Acquia Cloud service through SSH or SFTP and push the actual file, replacing the dummy file. This enables you to reference the file in Drupal and display it according to your preferences.
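For illustration only, the following hedged sketch replaces the dummy file over SSH; the SSH string, file names, and destination path are placeholders, and the destination must match the path where Drupal stored the dummy file.
# Hedged sketch: overwrite the dummy file with the real file over SSH.
# The SSH string, file names, and destination path are placeholders.
scp ./real-release-2.0.tar.gz user@ssh-host-printed-here:/mnt/files/site.env/sites/default/files/real-release-2.0.tar.gz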
memcache_admin incompatibility
Cloud Next uses individual mcrouter instances on each pod. This obscures information such as cache hits or misses, and the underlying platform architecture might cause metrics to be misreported on each request.
Therefore, the memcache_admin module does not correctly report the status of memcache instances. Cumulative statistics, available memory, and evictions are reported as zero. In addition, other statistics might be misreported.
Workaround:
To gather memcache statistics, run the following command in an SSH session:
acquia-memcache stats
This command displays statistics from the available memcache instances.
Watchdog logs do not work in SSH, Cloud Hooks, or scheduled tasks
Acquia plans to address this issue in a future release. To get a workaround for this issue, contact Acquia Support.
Issues occur if codebases contain a CHARSET or COLLATE name
The database copy, backup, and restore operations are updated to maintain compatibility between MySQL 5.7 and MySQL 8. This ensures that these operations continue to function as expected during a MySQL database version upgrade. This update modifies any data in your database from utf8mb4_0900_ai_ci to utf8mb4_general_ci during these operations. Acquia could not target the specific collation or charset. Therefore, if your database contains content with the utf8mb4_0900_ai_ci collation, the system updates it to utf8mb4_general_ci.
This update does not apply to database dumps and restores that are done manually with Drush. Therefore, such manual operations might fail when you move data between databases on different MySQL versions.
Workaround:
If you have a database dump that you have exported manually and you cannot import it properly, you can convert the file by running a command similar to the following:
zcat EDIT_ME_your_db_dump_file.sql.gz | sed -e 's/CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci/CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci/g' -e '/CHARSET=utf8mb4/!b' -e '/COLLATE/!s/CHARSET=utf8mb4/CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci/' | gzip -f >EDIT_ME_new_database_backup_file.sql.gz
The preceding example command assumes that:
- You have a database backup file that is gzip-compressed in the current folder.
- You are running a Cloud Platform SSH connection or Cloud IDE.
Therefore, you must update the command depending on your requirements.
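As a hedged follow-up check, you can confirm that no references to the old collation remain in the converted dump; the file name below matches the placeholder used in the command above.
# Hedged sketch: count remaining references to the old collation in the converted dump.
# A result of 0 indicates that the conversion replaced all occurrences.
zcat EDIT_ME_new_database_backup_file.sql.gz | grep -c 'utf8mb4_0900_ai_ci'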