Baking a multi-cloud RaspberryPi for DockerCon


Written By Dasha Gurova

On April 23, 2019
"



About one month ago, I was walking around the office and saw this goofy pink toy microwave. It was just there for the team to take funny pictures. Need I say it was a big hit at office parties? It started its life as a passion project and a demo of WordPress REST APIs, and with DockerCon19 on the horizon, we were thinking about how we could demonstrate Zenko to fellow developers at the event. We decided it should be interactive and fun, and suddenly our pink oven photo booth had a new purpose.

Zenko is a multi-cloud controller that enables developers to manage active workflows of unstructured data. It provides a single, unified API across all clouds to simplify application development. The data is stored in a standard cloud format, which makes it consumable directly by native cloud apps and services. The photo booth’s job is to create the data that we will then manage with Zenko.

Setting up the RaspberryPi

This is the list of what we needed to build the photo booth:

  • Raspberry Pi (in this case a Raspberry Pi 3 Model B)
  • SD card for the Raspberry Pi
  • Micro USB cable + 5V/2A power adapter (to power the Raspberry Pi)
  • Camera module for the Raspberry Pi
  • USB Hub
  • Pink toy microwave
  • 7 inch HDMI touch display
  • The decoration (yes, this is essential)

I would also like to mention that I ended up using a wired connection to the internet; a LAN cable works far better than wifi for a stable connection. The “Start” button is connected to the Raspberry Pi on GPIO pin 18 and the LED light on GPIO pin 7.
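For reference, here is a minimal sketch of how the button and LED can be addressed from Python, assuming the gpiozero library and BCM pin numbering (the demo script could equally use RPi.GPIO):

# Minimal sketch of the button/LED wiring from the software side,
# assuming gpiozero and BCM pin numbering.
from gpiozero import Button, LED

start_button = Button(18)  # "Start" button on GPIO 18
ready_led = LED(7)         # status LED on GPIO 7

ready_led.on()             # light the LED to signal "Ready"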

Install the Python dependencies

The operating system of choice is the latest version of Raspbian Stretch Lite. It was written to the SD card (32GB in this case, but it could be way smaller, as all pictures are backed up to the cloud by Zenko). I used Etcher to write the operating system to the card.

All the necessary libraries:

  • Python
  • Boto3 (AWS SDK for Python)
  • Picamera (package to use the camera)
  • GraphicsMagick (a tool to create gifs)
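As a quick sanity check after installation, a short sketch like this (hypothetical, not part of the demo code) confirms that the Python packages import and that GraphicsMagick is on the PATH:

# Hypothetical sanity check: the Python packages and the gm binary are available.
import shutil

import boto3      # AWS SDK for Python, used to talk to Zenko's S3 API
import picamera   # driver for the camera module (only importable on the Pi)

assert shutil.which("gm") is not None, "GraphicsMagick (gm) not found on PATH"
print("boto3", boto3.__version__, "and picamera", picamera.__version__, "are ready")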

How the demo flows

Step 1

The LED light indicates “Ready” status after the Pi is booted or the previous session is finished. The script runs in an endless loop and launches at boot.
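Roughly, the main loop looks something like the sketch below, again assuming gpiozero; run_photo_session() is a placeholder for steps 2 through 6. Launching at boot can be handled with a systemd unit or an entry in /etc/rc.local.

# Sketch of the endless main loop (gpiozero assumed, as above);
# run_photo_session() is a placeholder for steps 2-6.
from gpiozero import Button, LED

start_button = Button(18)
ready_led = LED(7)

def run_photo_session():
    pass  # capture photos, build the gif, upload to Zenko, show the preview

while True:
    ready_led.on()                 # signal "Ready"
    start_button.wait_for_press()  # wait for the "Start" button
    ready_led.off()
    run_photo_session()            # one full photo booth session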

Step 2

After the “Start” button is pressed, the script is executed. The user is guided to get ready and the Pi Camera Module will take 4 pictures in a row.
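Capturing the four frames with the picamera package can look like this (paths and timing here are illustrative, not taken from the demo code):

# Sketch: take 4 pictures in a row with the Pi Camera Module.
import time
from picamera import PiCamera

camera = PiCamera(resolution=(1024, 768))
time.sleep(2)  # give the sensor a moment to adjust exposure

frames = []
for i in range(4):
    path = "/tmp/photo_{}.jpg".format(i)
    camera.capture(path)
    frames.append(path)
    time.sleep(1)  # short pause between shots
camera.close()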

Step 3

All pictures are first saved to a local directory. An animated gif is then created with the GraphicsMagick tool:

gm convert -delay <delay between pictures> <input_files> <output_file>
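Inside the Python script, the same command can be invoked with subprocess; the file names below are the illustrative ones from the capture sketch above:

# Sketch: call GraphicsMagick from Python to assemble the animated gif.
import subprocess

frames = ["/tmp/photo_{}.jpg".format(i) for i in range(4)]  # from the capture step
gif_path = "/tmp/booth.gif"

# -delay is given in 1/100ths of a second between frames
subprocess.run(["gm", "convert", "-delay", "100"] + frames + [gif_path], check=True)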
Step 4

Next, the user is asked to enter their name and email. These two values will be used as metadata for the animated gif when uploading to Zenko.
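On the photo booth this is just two prompts; a minimal sketch, assuming plain console input:

# Sketch: collect the values that will be attached to the gif as metadata.
user_name = input("Enter your name: ").strip()
user_email = input("Enter your email: ").strip()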

Step 5

Upload the gif. Boto3 is the Amazon Web Services (AWS) SDK for Python. We create a low-level client with the service name ‘s3’, the keys to a Zenko instance and the endpoint. All this info is available in Orbit, connected to the Zenko instance.

import boto3

session = boto3.session.Session()

s3_client = session.client(
    service_name='s3',
    aws_access_key_id='ACCESS KEY',
    aws_secret_access_key='SECRET KEY',
    endpoint_url='ZENKO ENDPOINT',
)

# Body must be the binary content of the gif, hence the call to open()
with open(gif_path, 'rb') as data:
    s3_client.put_object(
        Bucket='transfer-bucket',
        Key=user_name,
        Body=data,
        Metadata={'name': user_name, 'email': user_email, 'event': 'dockercon19'},
    )

When putting the object to Zenko using the client, there are a few small details to keep in mind:

  • Key – a string (not a file path) that will be the name of the object.
  • Body – the binary content of the file (that’s why there is a call to open()).
  • Metadata – key: value pairs to be added to the object.
  • “transfer-bucket” – the name of the target bucket in Zenko.

This bucket is a transient source bucket and appears as “temporary” in Orbit. The “isTransient” location property is set through Orbit. It allows low-latency writes to local storage before CRR (cross-region replication) asynchronously transitions the data to cloud targets (GCP, AWS, Azure).

Step 6

If the object was put to Zenko successfully, preview mode starts and shows the resulting gif to the user a couple of times. Instant gratification is important 😉

Our freshly created data is ready to be managed!

Some of Zenko’s capabilities are:

  • Unified interface across Clouds
  • Data is stored in a cloud-native format
  • Global search using metadata
  • Policy-based data management
  • Single metadata namespace
  • Deploy-anywhere architecture

At this point, it is a good idea to check the animated gif in the Orbit browser and make sure that it was replicated to the different cloud storage locations (I already have a rule in place that replicates the object to all three cloud locations). Maybe create some new rules for where to replicate objects or when they expire. Have a peek at the statistics: memory usage, replication status, number of objects, total data managed, archived vs. active data. Or use the global search across all managed data in Orbit.
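Replication status can also be checked over the S3 API itself. Here is a hedged sketch, assuming Zenko reports the standard ReplicationStatus field the way AWS S3 does for replicated objects (the bucket and key below are illustrative):

# Sketch: check replication status and metadata over the S3 API.
import boto3

s3_client = boto3.session.Session().client(
    service_name='s3',
    aws_access_key_id='ACCESS KEY',
    aws_secret_access_key='SECRET KEY',
    endpoint_url='ZENKO ENDPOINT',
)

# ReplicationStatus is typically PENDING, COMPLETED or FAILED.
response = s3_client.head_object(Bucket='transfer-bucket', Key='booth-user')
print(response.get('ReplicationStatus'), response.get('Metadata'))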

Check out the repository with the code for the demo. Come see me at DockerCon19! Look for the Zenko booth and our pink oven photo booth.

If you cannot make it to DockerCon this year, I will be happy to chat or answer any questions on the forum. Cheers!
