Building my own Cloud Storage

Is your free 15GB of Google Drive storage full? Tired of constantly clearing storage or creating multiple Gmail accounts? I've got you covered.

A Bit of Context

As a CNCF Ambassador and an organiser of Cloud Native Hooghly, I recently organised a meetup for my cloud native chapter. After the event, the event management and photography teams wanted to upload the photos to cloud storage so that everyone could easily access them from their devices.

However, my team ran into an issue: their Google Drive accounts were full. This created a bottleneck in the upload process, causing frustration for everyone.

That same day, I explored paid plans for Google Drive and OneDrive, but funding them proved challenging. Undeterred, I decided to build my own solution. My research revealed several tutorials and documentation confirming the feasibility of this approach.

The Pathways

So, to make this possible, there are two options:

  1. Make your own server, at home (Ain't Kidding, For real)

  2. Use a Cloud Provider (Reduces costs, SIGNIFICANTLY!)

Let's talk about Option #1

This involves hardware components like:

  • Raspberry Pi (to be SSH-ed into as a headless Linux device)

  • External SSD (serving as the Network Attached Storage/NAS)

  • Router connected via Ethernet (to stay connected <3 )

This has a few excellent advantages: you don't share your critical data with any cloud provider or service, and anything that is changed or added stays on your premises. And the best part? Isn't it fun to have a storage server right in your bedroom? I would call it the biggest flex.

But there's a catch. You need to make sure you aren't toasting the Raspberry Pi by running it all day, and you need to keep it up and running 24/7.

With that being said...

Coming to Option #2

That is, going with a cloud provider; in this case, AWS.

Meanwhile, my readers: "What? Ughhhhhh! It doesn't make sense."

But trust me, it does. Sip some beer/coffee and let's jump into the tutorial.

The "How-to" part

FileCloud's Entry

Log on to https://ce.filecloud.com/ and create a free account to get the FileCloud Community Edition Licence for a year.

AWS Charisma

We need three AWS tools to create the infrastructure:

  • EC2

  • S3

  • IAM

Here, the EC2 machine runs the service and, by default, provides the storage.

But wait! I mentioned earlier that we are reducing costs here.

Right?..... Right?

Here comes S3 into the picture. We use object storage instead of the VM's built-in storage, which also gives us flexibility in terms of storage space.

Did I just imply "unlimited" storage? Well, technically, yeah. AWS bills you according to your usage.

Now here are the steps:

  • Create an EC2 instance and make sure you choose the "FileCloud" AMI from the AWS Marketplace. This automatically sets up the environment and gives you a t2.medium virtual machine.

  • Create a new Key Pair for this project. It will be downloaded automatically so that you can access the VM from your local machine's terminal. (If you're a terminal person, there's a CLI sketch of both steps right below.)
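
If you'd rather script these two steps, a rough AWS CLI equivalent looks like this. The key name is just an example, and the AMI ID is a placeholder; grab the real FileCloud AMI ID from the Marketplace listing:

# Create the key pair and save the private key locally
aws ec2 create-key-pair --key-name filecloud-key \
    --query 'KeyMaterial' --output text > filecloud-key.pem
chmod 400 filecloud-key.pem   # SSH refuses keys that are readable by others

# Launch the instance from the FileCloud AMI (placeholder ID below)
aws ec2 run-instances \
    --image-id ami-xxxxxxxxxxxxxxxxx \
    --instance-type t2.medium \
    --key-name filecloud-key \
    --count 1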

  • After the EC2 instance is up and running, copy the Public IPv4 DNS and paste it into your browser.
    HOLA! Now you can see the login page.

  • Append a /admin route to the URL. It may look like the same window, but it is not.

  • Log in through this portal.

    • Username: admin

    • Password: EC2_INSTANCE_ID
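
In case the instance ID isn't handy, you can copy it from the EC2 console, or query it with the CLI. This sketch assumes the FileCloud VM is your only running instance; add more filters otherwise:

# List the IDs of all running instances
aws ec2 describe-instances \
    --filters Name=instance-state-name,Values=running \
    --query 'Reservations[].Instances[].InstanceId' --output text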

  • Now, it will ask for the Licence, which we already have if you followed the steps correctly.

  • Great! We are in. Now, let's move our storage mount from EBS to S3.

  • Create an S3 bucket using the AWS Console (or from the CLI, as sketched below).
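
For the terminal folks, here is the CLI version of this step; the bucket name is just an example (yours must be globally unique):

# Create the bucket in us-east-1 (other regions also need --create-bucket-configuration)
aws s3api create-bucket --bucket my-filecloud-bucket --region us-east-1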

  • Create an IAM user using the following policy JSON

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:CreateBucket",
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ]
    }
  ]
}
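
If you saved that policy JSON to a file, a CLI sketch of this step could look like the following; the user and policy names here are my own examples:

# Create the IAM user that FileCloud will act as
aws iam create-user --user-name filecloud-s3-user

# Attach the policy above as an inline policy
aws iam put-user-policy --user-name filecloud-s3-user \
    --policy-name filecloud-s3-access \
    --policy-document file://filecloud-s3-policy.json
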
  • Now SSH into the VM to enable the S3 bucket configuration from the terminal (command sketch below)
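
Using the key pair from earlier, the command looks roughly like this. The login username depends on the AMI's base OS (e.g. ubuntu or ec2-user), so check the AMI's usage notes:

# Replace <Public-IPv4-DNS> with the value from the EC2 console
ssh -i filecloud-key.pem ubuntu@<Public-IPv4-DNS>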

  • Use the following command to edit the file

sudo vim /var/www/html/config/cloudconfig.php

Now, find the comment "STORAGE IMPLEMENTATION : local, openstack, amazons3".

Change local to amazons3 in the setting and save the changes.
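
If you'd rather not hunt through vim, a one-liner can make the same edit. This assumes the value lives in a define named TONIDOCLOUD_STORAGE_IMPLEMENTATION, which is the name FileCloud has used historically; verify it in your copy before running:

# Switch the storage backend from local disk to Amazon S3
sudo sed -i 's/"TONIDOCLOUD_STORAGE_IMPLEMENTATION", "local"/"TONIDOCLOUD_STORAGE_IMPLEMENTATION", "amazons3"/' /var/www/html/config/cloudconfig.php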

And the last thing we need to do in the terminal is make a copy of the amazons3 config file, using this command:

sudo cp /var/www/html/config/amazons3storageconfig-sample.php /var/www/html/config/amazons3storageconfig.php

Now, the Amazon S3 settings will be available on the FileCloud Dashboard.

  • Create an Access Key in Security Credentials under "3rd Party Services" using the same IAM user we created earlier. This will serve as the authentication token between AWS S3 and FileCloud. (A CLI version follows below.)
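
The same access key can also be minted from the CLI (same example user name as before). Note that the secret is shown only once, so copy it right away:

# Generates an access key ID + secret for the FileCloud user
aws iam create-access-key --user-name filecloud-s3-user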

  • Copy the S3 Bucket Key and S3 Bucket Secret from AWS and paste them into the FileCloud Dashboard's Storage Settings. Also, enter the desired bucket name.

  • You are good to go!

The Finishing Touch

To make things more convenient and easier to use, you can map a custom domain name to this service and use it for your work.

In short, I used Cloudflare to handle DNS management and SSL/TLS.

Now it looks something like this -->

Huhhh! That was a lot of work. I believe my teammates at Cloud Native Hooghly will love this homegrown solution, and they can enjoy cloud storage like never before.

Huge shout out to NetworkChuck for the amazing tutorial video.

Thanks for reading till the very end.

Follow me on Twitter, LinkedIn and GitHub for more amazing blogs about Tech and More!