On Cloud Platform, uploaded files are separate from the Drupal database and your code repository. Cloud Platform deployment code creates a symbolic link to your application’s /files directory. If you use Drupal’s multisite feature, Cloud Platform creates a separate /files directory for each settings.php file.
Drupal codebases (including Drupal core, contributed modules, and custom code)
on Cloud Platform are managed using the Git version control system. Git
can manage text files full of code, but is not suitable for large collections
of user-uploaded objects. Cloud Platform stores your application’s
/files directories outside of your repository and manages them for you. This
simplifies your workflow and makes your repository smaller and more manageable.
To access your application’s /files directories, use SSH. For more information, see Disk Storage on Cloud Next.
Cloud Platform Enterprise applications run their live production environments on multiple application-layer nodes simultaneously, ensuring high availability of application infrastructure. For environments running on Cloud Next technologies, this applies to non-production environments as well. If an application were configured to use local file storage, files uploaded to any one node would not be available to the other nodes handling traffic for your application. For this reason, Cloud Platform Enterprise environments use shared file systems to ensure that data your code writes to a /files directory is accessible on all nodes running your application.
When you import or create your Drupal codebase, Cloud Platform creates symbolic links to your public files directory. Every Drupal multisite website in your account has its own files directory; for example, both [docroot]/sites/default/files and [docroot]/sites/example.com/files are symbolic links into the shared file system.
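The link layout can be illustrated with a short Python sketch (all paths here are stand-ins created in a temporary directory, not Cloud Platform’s actual mount points):

```python
import os
import tempfile

# Stand-in layout: a shared storage area and a Drupal docroot.
root = tempfile.mkdtemp()
shared = os.path.join(root, "shared", "files")
docroot_files = os.path.join(root, "docroot", "sites", "default", "files")
os.makedirs(shared)
os.makedirs(os.path.dirname(docroot_files))

# The platform-style symbolic link: the docroot path points at shared storage.
os.symlink(shared, docroot_files)

# A file written through the docroot path lands in shared storage.
with open(os.path.join(docroot_files, "logo.png"), "wb") as f:
    f.write(b"\x89PNG")

print(os.path.exists(os.path.join(shared, "logo.png")))  # True
```

Because every site’s files path resolves to the same shared area, content written through one path is immediately visible through the other.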
To ensure its privacy, the /files-private directory is not symbolically linked into your application’s [docroot]. For private file handling, you can use either the absolute path or a path relative to [docroot], such as ../acquia-files. For more information, see Setting the private files directory.
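The privacy property above can be sketched with the standard library: a relative path such as ../acquia-files resolves to a location outside the web root, so private files are never directly web-accessible (the docroot path below is hypothetical):

```python
import os

# Hypothetical docroot; "../acquia-files" is the relative path from the doc.
docroot = "/var/www/html/docroot"
private = os.path.normpath(os.path.join(docroot, "..", "acquia-files"))

print(private)  # /var/www/html/acquia-files
print(private.startswith(docroot + os.sep))  # False: outside the web root
```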
Extremely large files can cause problems for your application. For example, large images can increase page load times, causing a perceived performance issue for your visitors. Additional problems include files that are too large for your application’s Varnish® cache, and uploads of large files through Drupal that fail due to timeouts at various levels of the stack.
If, however, you need to use extremely large files, consider these resources and factors:
If you find that uploads of large files to Cloud Platform Enterprise are timing out or otherwise failing, see Correcting broken uploads on Cloud Platform Enterprise.
If you need to upload files greater than 256 MB in size, create a Support ticket to discuss having your application configured to allow larger uploads. This option is available only if you are entitled to create Support tickets.
Read Handling large files for best practices about image processing, file organization, and storage.
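As a practical aid for the 256 MB threshold mentioned above, a short scan can flag oversized files before an upload attempt (the helper below is illustrative, not an Acquia-provided tool):

```python
import os

LIMIT = 256 * 1024 * 1024  # the 256 MB upload threshold discussed above

def oversized_files(root, limit=LIMIT):
    """Yield (path, size) for files under `root` larger than `limit` bytes."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            size = os.path.getsize(path)
            if size > limit:
                yield path, size
```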
Drupal applications on Cloud Platform support downloading files of any size. However, large downloads require a correct Content-Length header in the HTTP response in order to succeed. For any static file, the Apache process in the Cloud Platform stack provides the correct header. If the download is large (1 GB or greater) and is dynamically generated (for example, by a PHP script), it is likely to fail unless a Content-Length header is explicitly provided.
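The requirement can be sketched in Python, with a minimal WSGI app standing in for the PHP script (names here are illustrative): because the body is generated on the fly, nothing on disk tells the server its size, so the code must declare Content-Length itself.

```python
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    # Dynamically generated body: the server cannot infer its size from a
    # file on disk, so Content-Length must be set explicitly.
    payload = "".join("line %d\n" % i for i in range(3)).encode("utf-8")
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(payload))),  # explicit, as required
    ])
    return [payload]

# Exercise the app without a real server.
environ = {}
setup_testing_defaults(environ)
captured = {}
def start_response(status, headers):
    captured["headers"] = dict(headers)
body = b"".join(app(environ, start_response))
print(captured["headers"]["Content-Length"], len(body))  # 21 21
```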
Maintaining an extremely large number of files in your application can have a substantial negative effect on performance and stability, especially if they are all contained in the same directory. Acquia has found that having over 2,500 files in any single directory in the files structure, or a total of 250,000 files across all directories for environments running on Cloud Classic technologies, can seriously impact your infrastructure’s performance and potentially its stability. If your application requires a large number of files, maintain them in multiple directories. For more information about files and performance, see Improving application performance: Proactively organizing files in subfolders and Optimizing file paths: Organizing files in subfolders, which includes scripts for migrating files into subdirectories in your file system.
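One common way to keep any single directory well under the 2,500-file mark is to shard uploads into nested subdirectories keyed on a hash of the filename. The helper below is a minimal illustration of the idea, not one of the migration scripts the linked articles provide:

```python
import hashlib
import os

def sharded_path(base, filename, depth=2):
    """Map a filename into nested subdirectories derived from its hash,
    spreading files across 256**depth directories so no single directory
    accumulates thousands of entries."""
    digest = hashlib.sha256(filename.encode("utf-8")).hexdigest()
    parts = [digest[2 * i:2 * i + 2] for i in range(depth)]
    return os.path.join(base, *parts, filename)

print(sharded_path("files", "report.pdf"))  # e.g. files/ab/3f/report.pdf
```

Because the mapping is deterministic, the same filename always resolves to the same subdirectory, so existing links only need a one-time rewrite.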
For environments running on Cloud Next technologies, there is no practical limit to how much storage is available. However, Acquia strongly recommends keeping environments under 1 TB in total file system size to ensure that file copy operations and disaster recovery processes can be completed in a timely manner when required.
For environments running on Cloud Classic technologies, Cloud Platform uses Amazon Elastic Block Stores (EBS) for an application’s files directory.
Since EBS volumes can be no larger than 1 terabyte, Acquia recommends using Amazon’s S3 storage service if you require more than 1 terabyte of file storage. Amazon S3 is highly reliable and can scale to any storage size. For Drupal 7 applications, you can use the AmazonS3 module, which uses the Drupal file wrapper to send files directly to and from S3.
For more information, see Using external storage for files.
Files in your application’s file storage are not executable by Cloud Platform PHP processes.