Google Cloud Storage is an object storage service that lets you upload files to a virtual bucket, providing quick and easy file storage for your applications. It competes with AWS's S3 storage service on both price and features.
How much does GCP Cloud Storage cost?
Overall, GCP Cloud Storage is offered at a price similar to AWS S3. There are several storage classes with different prices; the following prices (per GB per month) are based on the us-east1 region, one of the largest (and cheapest) regions:
Standard storage costs $0.020, and is used for storing general-purpose files.
Nearline storage costs $0.010, and is used for infrequently accessed data; it has a 30-day minimum storage duration and additional costs to access the data.
Coldline storage costs $0.004, and is used for data that is rarely accessed (about once per quarter).
Archive storage costs $0.0012, and is used for long-term archiving. It has a minimum storage duration of one year and high costs for data retrieval. However, unlike AWS Glacier Deep Archive, your data is accessible in milliseconds rather than hours or days.
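For a back-of-the-envelope sense of scale: storing 1 TB for a month works out to roughly $20 in Standard, $10 in Nearline, $4 in Coldline, and $1.20 in Archive, before any retrieval or operation charges.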
You can also choose to distribute your data across multiple regions. This improves redundancy, but the main reason to do so is to reduce access latency for content served to end users. Having copies of your data in many different places means the average latency for any given user will be low.
Sure, storing data in multiple locations is more expensive, but not as much as you might think: for the entire US multi-region, standard storage costs $0.026 per GB, compared to $0.020 for us-east1. Even if you only use one region, your data is still stored across several availability zones for redundancy and the lowest possible internal latency. With multi-region deployments, you don't store copies in every AZ, so the costs end up relatively similar.
Creating a bucket
In the GCP console, search for “Storage” in the sidebar and click on “Browser”:
From there, you can create a new bucket or modify your existing ones.
Give it a name, which must be globally unique.
You have a few options for the location. The default is multi-region, which spans a large area and will provide the best performance for end users. If you only access data from one region, the single-region option is cheaper. Dual-region is much more expensive than either, and is only useful for high-availability deployments where low access latency in both regions is key.
Choose the default storage class for the bucket. If you upload data without specifying a class, it will default to the one you choose here. You can, of course, have both Standard and Nearline objects in the same bucket.
The next option controls the level of access to each object. If the entire bucket is used for the same purpose, such as a publicly accessible image bucket, you can set it to Uniform to simplify access management. Otherwise, leave it on Fine-Grained. There is no difference in price.
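As an example of why uniform access is convenient: with the gsutil tool covered below, granting public read access to everything in a uniform-access bucket is a single IAM binding (the bucket name here is a placeholder):

gsutil iam ch allUsers:objectViewer gs://bucket-name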
Click Create, and you should see the new bucket in the list.
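If you prefer the command line, you can create the same bucket with gsutil (installed below). As a sketch, this assumes the us-east1 region, the Standard class, and a bucket name of your choosing:

gsutil mb -c standard -l us-east1 gs://your-unique-bucket-name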
If you want to upload objects to test it out, you can do so from the console:
However, this will not be the way you access it most of the time. If you want to access it from the command line, you will need to install gsutil, a Python utility for accessing Cloud Storage. It is installed by default on Compute Engine instances, but if you want to access it from your personal computer or another machine, you will need to install the Google Cloud SDK:
curl https://sdk.cloud.google.com | bash
Then run gcloud init to link your account:
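gcloud init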
This will give you a link that you can open in your browser to choose your Google account.
Once your account is linked, you should be able to upload objects with gsutil cp:
gsutil cp file.txt gs://bucket-name
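The same command works in reverse to download objects, and gsutil ls lists a bucket's contents:

gsutil cp gs://bucket-name/file.txt .
gsutil ls gs://bucket-name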
If you are migrating from S3, Google provides a transfer tool (Storage Transfer Service) to easily move your data to the new bucket.
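Alternatively, gsutil itself can read from S3 once your AWS access keys are added to its ~/.boto configuration file. A rough sketch of a one-time sync, with placeholder bucket names:

gsutil -m rsync -r s3://source-bucket gs://destination-bucket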