The ultimate guide to object storage and IAM in AWS, GCP and Azure

Here is a brief overview of the architectural differences between AWS, GCP and Azure for data storage and authentication, with additional links if you wish to dive deeper into specific topics.

Working on Zenko at Scality, we have to deal with multiple clouds on a day-to-day basis. Zenko might make these clouds seem very similar, as it simplifies the inner complexities and gives us a single interface to deal with buckets and objects across all clouds. But the way actual data is stored and accessed on these clouds is very different.

Disclaimer: These cloud providers have numerous services, multiple ways to store data and different authentication schemes. This blog post deals only with storage whose purpose is simple: give me some data and I will give it back to you. That means object storage only (no database or queue storage), the actual data it holds, and the authentication needed to access and manipulate that data. The intent is to discuss the key differences to help you decide which one suits your needs.

Storage

Each cloud has its own hierarchy for storing data. For any type of object storage, everything comes down to objects and buckets/containers. The table below gives a bottom-up comparison of how objects are stored in AWS, GCP and Azure.

Category | AWS | GCP | Azure
Base Entity | Objects | Objects | Objects (also called blobs)
Containers | Buckets | Buckets | Containers
Storage Class | S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, S3 One Zone-IA, S3 Glacier, S3 Glacier Deep Archive | Multi-Regional Storage, Regional Storage, Nearline Storage, Coldline Storage | Hot, Cool, Archive
Region | Regions and AZs | Multi-regional | Azure Locations
Underlying service | S3, S3 Glacier | Cloud Storage | Blob Storage
Namespace | Account | Project | Storage Account
Management | Console, Programmatic | Console, Programmatic | Console, Programmatic

Keys

Following the traditional object storage model, all three clouds (AWS, GCP and Azure) can store objects. Objects are identified using ‘keys’. Keys are essentially names/references to the objects, with the ‘value’ being the actual data. Each cloud has its own metadata engine that lets us retrieve data using keys. In Azure storage these objects are also called “blobs”. Any key that ends with a slash (/), or with a delimiter in the case of AWS, is treated as a prefix for the underlying objects. This helps group objects in a folder-like structure and can be used for organizational simplicity.
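
For example, here is a minimal, hedged sketch (the bucket, container and key names are made up, and credentials are assumed to be configured) of how a prefix groups keys into a folder-like listing:

# AWS: list only the keys under the photos/ "folder"
$ aws s3api list-objects-v2 --bucket my-bucket --prefix photos/ --delimiter /

# GCP: gsutil treats the trailing slash the same way
$ gsutil ls gs://my-bucket/photos/

# Azure: list blobs in a container that share a prefix
$ az storage blob list --account-name mystorageaccount --container-name my-container --prefix photos/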

Limitations:

  • AWS: 5 TB object size limit with a 5 GB part size limit
  • GCP: 5 TB object size limit
  • Azure: 4.75 TB blob size limit with a 100 MB block size limit

Containers

In object storage everything is stored under containers, also called buckets. Containers can be used to organize the data or provide access to it but, unlike a typical file system architecture, buckets cannot be nested.

Note that in AWS and GCP containers are referred to as buckets and in Azure they are actually called containers.
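
As a quick, hedged illustration (placeholder names, credentials assumed to be configured), creating a top-level container looks like this on each cloud:

# AWS: make a bucket
$ aws s3 mb s3://my-new-bucket

# GCP: make a bucket
$ gsutil mb gs://my-new-bucket

# Azure: create a container inside an existing storage account
$ az storage container create --name my-container --account-name mystorageaccount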

Limitations:

  • AWS: 1000 buckets per account
  • GCP: No known limit on the number of buckets, but there are limits on the rate of operations.
  • Azure: No limit on the number of containers

Storage Class

Each cloud solution provides different storage tiers based on your needs; a short CLI sketch of choosing a tier follows the lists below.

AWS:

  • S3 Standard: Data is stored redundantly across multiple devices in multiple facilities and is designed to sustain the loss of two facilities concurrently, with 99.99% availability and 99.999999999% durability.
  • S3 Intelligent-Tiering: Designed to optimize costs by automatically transitioning data to the most cost-effective access tier, without performance impact or operational overhead.
  • S3 Standard-IA: Used for data that is accessed less frequently but requires rapid access when needed. Lower fee than S3 Standard, but you are charged a retrieval fee.
  • S3 One Zone-IA: Same as Standard-IA, but data is stored in only one availability zone; it will be lost if that availability zone is destroyed.
  • S3 Glacier: Cheap storage suitable for archival or infrequently accessed data.
  • S3 Glacier Deep Archive: Lowest-cost storage, used for data archival and retention that may be accessed only twice a year.

GCP:

  • Multi-Regional Storage: Typically used for storing data that is frequently accessed (“hot” objects) around the world, such as serving website content, streaming videos, or gaming and mobile applications.
  • Regional Storage: Data is stored in the same region as your Google Cloud Dataproc resources. It has a higher SLA (99.99%) than multi-regional storage.
  • Nearline Storage: Available in both multi-regional and regional locations. Very low-cost storage used for archival or infrequently accessed data, with higher operation and data retrieval costs.
  • Coldline Storage: Lowest-cost storage, used for data archival and retention that may be accessed only once or twice a year.

Azure:

  • Hot: Designed for frequently accessed data. Higher storage costs but lower retrieval costs.
  • Cool: Designed for data that is typically accessed about once a month. It has lower storage costs and higher retrieval costs compared to hot storage.
  • Archive: Long-term backup solution with the cheapest storage costs and the highest retrieval costs.
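
Here is that sketch: a hedged example (placeholder bucket, container and object names, credentials assumed to be configured) of selecting or changing a tier from the command line:

# AWS: pick a storage class per object at upload time
$ aws s3 cp backup.tar s3://my-bucket/backup.tar --storage-class STANDARD_IA

# GCP: rewrite an existing object into a colder class
$ gsutil rewrite -s nearline gs://my-bucket/backup.tar

# Azure: move a blob to a different access tier
$ az storage blob set-tier --account-name mystorageaccount --container-name my-container --name backup.tar --tier Cool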

Regions

Each cloud provider has multiple data centers, facilities and availability zones divided into regions. Usually, a specific region is used for better latency, and multiple regions are used for HA / geo-redundancy. You can find more details about each cloud provider’s storage regions in their respective documentation.

Underlying service

AWS, GCP and Azure combined have thousands of services, which are not limited to storage. They include, but are not limited to, compute, databases, data analytics, traditional data storage, AI, machine learning, IoT, networking, IAM, developer tools, migration, and more. Here is a cheat sheet that I follow for GCP. As mentioned before, we are only going to discuss actual data storage services.

AWS provides Simple Storage Service (S3) and S3 Glacier, GCP uses its Cloud Storage service, and Azure uses Blob storage. All of these services provide a massively scalable storage namespace for unstructured data, along with their own metadata engines.

Namespace

This is where the architecture of each cloud diverges from the others. Every cloud has its own hierarchy. Be aware that we are only discussing the resource hierarchy for object storage solutions; for other services, this might be different.

AWS: Everything in AWS lives under an “account”. In a single account there is one S3 service, which holds all the buckets and corresponding objects. Users and groups can be created under this account. An administrator can provide access to the S3 service and the underlying buckets to users and groups using permissions, policies, etc. (discussed later). There is no hard limit on the amount of data that can be stored under one account. The only limit is on the number of buckets, which defaults to 100 but can be increased to 1000.

GCP: GCP’s hierarchy model is built around ‘Projects’. A project can be used to organize all your Google Cloud services/resources. Each project has its own set of resources, and all projects are eventually linked to a domain. For example, an organization might have a folder for each department, with multiple projects under each folder; depending on its requirements and current usage, each project can use different resources. It’s important to note that every service is available to every project, and each project has its own set of users, groups, permissions, etc. By default you can create ~20 projects on GCP; this limit can be increased on request. I have not seen any storage limits specified by GCP except for the 5 TB single-object size limit.


Azure: Azure is different from both GCP and AWS. In Azure we have the concept of storage accounts. An Azure storage account provides a unique namespace for all your storage. This entity consists only of data storage; all other services are accessed by the user as separate entities from storage accounts. Authentication and authorization are managed by the storage account.

A storage account is limited to 2 PB of storage for the US and Europe, and 500 TB for all other regions (including the UK). The number of storage accounts per region per subscription, including both standard and premium accounts, is 250.

Management

All cloud providers have the option of console access and programmatic access.

Identity and Access Management

Information security should ensure that data flows only where it is supposed to, and only to the people it is supposed to reach. Per the CIA triad, you shouldn’t be able to view or change data you are not authorized to, and you should be able to access the data you have a right to. This ensures confidentiality, integrity and availability (CIA). The AAA model of security covers authentication, authorization and accounting; here, we will cover authentication and authorization. There are other things to keep in mind while designing secure systems. To learn more about the design considerations, I would highly recommend reading the security design principles by OWASP and the OWASP Top 10.

AWS, GCP and Azure provide solid security products with reliable security features. Each one has its own way of providing access to its storage services. I will provide an overview of how users can interact with the storage services; there is a lot more going on in the background than what is discussed here. For our purpose, we will stick to everything needed for using storage services. I will assume that you already have an AWS, GCP and Azure account with the domain configured (where needed). This time I will use a top-down approach:


Category | AWS | GCP | Azure
Underlying Service | AWS IAM | GCP IAM | AAD, ADDS, AADDS
Entities | Users/groups per account | Users/groups per domain per project | Users/groups per domain
Authentication | Access Keys / Secret Keys | Access Keys / Secret Keys | Storage Endpoint, Access Key
Authorization | Roles, permissions, policies | Cloud IAM permissions, Access Control Lists (ACLs), Signed URLs, Signed Policy Documents | Domain user permissions, shared keys, shared access signatures
Required details for operations | Credentials, bucket name, authorization | Credentials, bucket name, authorization | Credentials, storage account name, container name

Underlying Service

AWS: AWS Identity and Access Management (IAM) is an AWS web service that helps you securely manage access to all your resources. You can use IAM to create IAM entities (users, groups, roles) and then grant them access to various services using policies. IAM handles both authentication and authorization for users, groups and resources. In other clouds there can be multiple IAM services for multiple entities, but in AWS there is only one point of authentication and authorization per account.

GCP: GCP IAM is similar to AWS IAM, but every project has its own IAM portal and its own set of IAM entities (users, groups, resources).

Azure: Azure uses the same domain services as Microsoft and is known to have a very stable authentication service. Azure supports three types of services: Azure AD (AAD), Active Directory Domain Services (ADDS, used with Windows Server 2016/2012 via DCPromo) and Azure Active Directory Domain Services (AADDS, managed domain services).

Azure AD is the most modern of the three services and should be used for any enterprise solution. It can sync with the cloud as well as with on-premises services. It supports various authentication modes such as cloud-only, password hash sync + seamless SSO, pass-through authentication + seamless SSO, ADFS, and third-party authentication providers. Once you have configured your AD, you use RBAC to allow your users to create storage accounts.

Entities

All cloud providers have the concept of users and groups. In AWS there is a single set of users and groups across an account; in GCP there is a single set of users and groups in every project. In Azure the users and groups depend on how the domain was configured: Azure AD can sync all users from the domain, or an admin can add users on the fly for their particular domain.

Authentication

Credentials are how end users prove their identity. By now you might have figured out that the services that help us create users also provide us access to the storage services. This is true in the case of AWS and GCP, but not for Azure.

For AWS and GCP, their respective IAM services allow us to generate a pair of Access Key and Secret Key for any user. These keys can later be used by the users to authenticate themselves to cloud services, including AWS S3 and GCP Cloud Storage. For Azure, authentication for the containers is managed by the storage account: when a storage account is created, a set of keys and an endpoint are created along with it. These keys and the endpoint, or the domain credentials, are used for authentication.
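
As a hedged sketch (placeholder values throughout), this is roughly how those credentials get wired up for command-line use:

# AWS: the CLI and SDKs read these environment variables
$ export AWS_ACCESS_KEY_ID=<your-access-key-id>
$ export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>

# Azure: az storage commands honor these variables
$ export AZURE_STORAGE_ACCOUNT=<storage-account-name>
$ export AZURE_STORAGE_KEY=<access-key>

# GCP: HMAC-style access/secret keys for interoperable access go in ~/.boto
$ cat ~/.boto
[Credentials]
gs_access_key_id = <access-key>
gs_secret_access_key = <secret-key>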

Authorization

Once a user has proved their identity, they need proper access rights to interact with the S3 buckets or GCP buckets or Azure containers.

AWS: In AWS this can be done in multiple ways. A user can first be given access to the S3 service using roles/permissions/policies, and can then be given bucket-level permissions using bucket policies or ACLs. Here is a small tutorial on how a user can be given permissions for an S3 bucket. There are many other ways you can access buckets, but it’s always good to use some kind of authentication and authorization.
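
To make this concrete, here is a hedged example (hypothetical account ID, user name and bucket name) of granting one IAM user read access to a bucket through a bucket policy:

# Write a bucket policy allowing user "alice" to list the bucket and read its objects
$ cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::123456789012:user/alice"},
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": ["arn:aws:s3:::my-bucket", "arn:aws:s3:::my-bucket/*"]
  }]
}
EOF

# Attach the policy to the bucket
$ aws s3api put-bucket-policy --bucket my-bucket --policy file://policy.json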

GCP: In GCP every project has its own IAM instance. Similar to AWS, you can control who can access the resource and how much access they will have. For our use case, this can be done using Cloud IAM permissions, Access Control Lists(ACLs), Signed URLs or Signed Policy Documents. GCP has a very thorough guide and documentation on these topics. Here is the list of permissions that you might want to use.

Azure: Azure has a lot of moving pieces, considering it uses Azure AD as the default authentication mechanism. For now, we will assume that you are already authenticated to AD and only need to access the resources inside a storage account. Every storage account has its own IAM, through which you can give a domain user permissions to access resources under the storage account. You can also use shared keys or shared access signatures for authorization.

Required Details for Operations

Now that we are authenticated and authorized to our storage services, we need some details to actually access our resources. Below are the details required for programmatic access; a short CLI sketch with placeholder names follows the list:

  • AWS S3: Access Key, Secret Key, Bucket name, region (optional)
  • GCP Cloud Storage: Access Key, Secret Key, Bucket name
  • Azure: Storage Account name, Storage endpoint, Access Key, Container name
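
Here is the sketch promised above, with placeholder names and the assumption that credentials are already configured as described in the Authentication section:

# AWS S3: access/secret keys plus the bucket name (region optional)
$ aws s3 ls s3://my-bucket --region us-east-1

# GCP Cloud Storage: HMAC keys from ~/.boto plus the bucket name
$ gsutil ls gs://my-bucket

# Azure: storage account name, access key and container name
$ az storage blob list --account-name mystorageaccount --account-key <access-key> --container-name my-container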


This concludes my take on the key differences I noticed in a multi-cloud storage environment while working with the multi-cloud data controller, Zenko.

Let me know what you think or ask me a question on the forum.

Deploy Zenko on Amazon EKS in 30 minutes

Do you have half an hour and an AWS account? If so, you can install Zenko and use Orbit to manage your data. Below is a step-by-step guide with time estimates to get started.

If you are an AWS user with appropriate permissions or policies to create EC2 instances and EKS clusters, you can dive into this tutorial. Otherwise, contact your administrator, who can add permissions (full documentation).

Initial Machine Setup (estimated time: 10 minutes):

For this tutorial, we use a jumper EC2 instance with Amazon Linux to deploy and manage our Kubernetes cluster. A power user can use their own workstation or laptop to manage the Kubernetes cluster.

Follow this guide to set up your EC2 instance and connect to your new instance using the information here. Once connected to the instance, install the applications that will help set up the Kubernetes cluster.

Install Kubectl, a command-line tool for running commands against Kubernetes clusters.

$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl

$ chmod +x ./kubectl
$ sudo mv ./kubectl /usr/local/bin/kubectl

Verify that kubectl is installed (expect a similar output):

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff0bb44d8584413b598ad8360ec1def", GitTreeState:"clean", BuildDate:"2019-01-10T23:35:51Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}

Download aws-iam-authenticator, a tool to use AWS IAM credentials to authenticate to a Kubernetes cluster.

$ curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.11.5/2018-12-06/bin/linux/amd64/aws-iam-authenticator

$ chmod +x ./aws-iam-authenticator
$ mkdir bin
$ cp ./aws-iam-authenticator $HOME/bin/aws-iam-authenticator && export PATH=$HOME/bin:$PATH
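
A quick sanity check (the exact output may vary) that the binary is on your PATH:

$ aws-iam-authenticator help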

Install eksctl. eksctl is a simple CLI tool for creating clusters on EKS – Amazon’s new managed Kubernetes service for EC2.

$ curl --silent --location "https://github.com/weaveworks/eksctl/releases/download/latest_release/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp

$ sudo mv /tmp/eksctl /usr/local/bin

Configure AWS credentials:

$ mkdir ~/.aws
$ vim ~/.aws/credentials
$ cat ~/.aws/credentials
[default]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>
$ export AWS_SHARED_CREDENTIALS_FILE=~/.aws/credentials

Verify credentials work. If the output looks similar, you are ready to launch your Kubernetes cluster:

$ eksctl get clusters
No clusters found

Deploy a Three-Node Kubernetes Cluster for Zenko (estimated time: 10–15 minutes):

$ eksctl create cluster --name=zenko-eks-cluster --nodes=3 --region=us-west-2

Once you get the line below, your cluster is ready:

[✔]  EKS cluster "zenko-eks-cluster" in "us-west-2" region is ready
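
You can double-check that the three worker nodes joined the cluster (node names will differ in your environment):

$ kubectl get nodes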

Install Helm:

$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh 
$ bash ./get_helm.sh
$ helm version
Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}

EKS requires role-based access control to be set up. The first step is to create a service account for Tiller:

$ kubectl create serviceaccount tiller --namespace kube-system

Next, bind the Tiller service account to the cluster-admin role: make an rbac-config.yaml file and apply it.

$ cat rbac-config.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller-role-binding
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system

$ kubectl apply -f rbac-config.yaml
$ helm init --service-account tiller
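
As a quick check (the output may vary slightly), confirm the Tiller pod is running before deploying charts:

$ kubectl get pods --namespace kube-system | grep tiller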

Deploy Zenko (estimated time: 10 minutes)

Install Git:

$ sudo yum install git

Clone Zenko:

$ git clone https://github.com/scality/Zenko/

Go to the kubernetes folder and deploy Zenko. This will take about 10 minutes.

$ cd Zenko/kubernetes/
$ helm init
$ helm install --name zenko --set ingress.enabled=true \
--set ingress.hosts[0]=zenko.local \
--set cloudserver.endpoint=zenko.local zenko
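
While the chart deploys, you can watch the pods come up; Zenko is ready once they report Running or Completed (pod names will differ):

$ kubectl get pods -w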

Connect EKS Zenko to Orbit

Find the Instance ID to use for registering your instance:

$ kubectl logs $(kubectl get pods --no-headers=true -o \
custom-columns=:metadata.name | grep cloudserver-manager) | grep Instance | tail -n 1

{"name":"S3","time":1548793280888,"req_id":"a67edf37254381fc4781","level":"info","message":"this deployment's Instance ID is fb3c8811-88c6-468c-a2f4-aebd309707ef","hostname":"zenko-cloudserver-manager-8568c85497-5k5zp","pid":17}

Copy the ID and head to Orbit to paste it in the Settings page. Once the Zenko instance is connected to Orbit you’ll be able to attach cloud storage from different providers.

If you have any questions or want to show off a faster time than 30 minutes, join us at the Zenko forum.

Photo by chuttersnap on Unsplash

Hackathon Zenko x 42 Paris: introducing team E2R5

Team E2R5 (the name is the group’s “physical” location within École 42, the row they occupy in the clusters of 300 workstations) brings together four guys (not the Fab Four) whose project (“your mission, should you choose to accept it…”) is to build a sort of micro-service for Zenko. More precisely, they have to write a module that makes the actions you can perform through Zenko (reading and writing data) payable in cryptocurrency. Recording these actions in the blockchain also improves the traceability of operations, and therefore buyer-seller trust.

This project has many upsides for Guillaume, François, Cédric and Brice: it touches on blockchain (the topic everyone is talking about right now), and it lets them discover JavaScript and work on the web. They also point out that the format of this hackathon won them over: it lasts 5 days instead of the usual 2, and it has to lead to something concrete, namely a prototype (“we’re going to have to crank out some code”)!

Having already taken part in events like this together, and having come out of the same Piscine, they have no trouble working with one another. They move forward step by step, in sync despite their different temperaments: Guillaume the “all-rounder”, François the “funny one” of the bunch, Cédric the “newbie” of the group and Brice the “wizard”.

Team E2R5, Zenko Scality Hackathon Paris

Guillaume earned a Bac STI in electronics, then went on to a DUT in computer science. After a brief attempt at cognitive science, he worked for a few years in the construction industry before finally joining École 42, curious to see what it was all about, only to find that you could meet people there from very different backgrounds.

François took the same Bac as Guillaume and also worked in construction for a while. Then one day, the chance to go to Australia with a friend came up. Off he went! He has fond memories of that time, even if he was glad to return to France after 6 months away. Repairing phones and tablets at the time, he discovered École 42 online. He gave it a try, and it was a revelation: “Code is life!”

After getting his Bac S, Cédric started studies in math and computer science; he then went through a work-study program, dabbled in business computing, and worked as a temp for a while before spending a year in Australia and Southeast Asia. Back home, with no precise idea of what he wanted to do (still none, he points out), and unimpressed by the short computer science courses he found, he took part in the Piscine held in November 2016. Today he has no regrets, quite the opposite.

The last “chosen one” of the group is Brice. He is the one with the broadest computing background (especially when it comes to open source). He struggled a bit after his studies before taking École 42’s tests on a friend’s advice and being accepted.

Today, they note that the craziest thing since they joined École 42 is that, paradoxically, they have never had so much freedom and never worked so hard.


End of the story: E2R5 delivered a working prototype in five days, despite the multitude of technologies they had to learn! The jury noted the creativity of their project but did not rank them in the top 3. Still, by managing to instantiate a SmartContract and to “hack” the CloudServer code to make it communicate with that contract, they proved to themselves that they are capable of a lot!

Hackathon Zenko x 42 Paris: introducing team DNA

Team DNA (a nickname formed from the first initials of its three members, namely Nate, Denis and Arnaud) has quite a mission: to speed up and optimize metadata access in Zenko. The metadata is currently stored in LevelDB, and they are going to try to move it into MongoDB.

The choice of this project was motivated by two things: the technologies involved were ones they already knew, and it gave them the opportunity to apply and deepen the knowledge they had already acquired.

Nate, Denis and Arnaud know each other well, having already worked together on other projects.

Team DNA Zenko 42 Hackathon

Each of them has an unusual background!

  • Nate is American, a data analysis fan who worked in the hospital sector several times after his master’s degree in biochemistry! He wants to connect the two and, in love with France and Paris in particular, coming to École 42 seemed obvious to him. A “good lead,” as he puts it!
  • Denis tried a bit of everything during his studies (Bac S, economics, marketing…). That background seemed enough to start ventures and succeed, but every initiative ran into the same crippling problem: he didn’t know how to code. A real gap… Having heard about École 42, and a lifelong geek, he didn’t hesitate: he signed up and successfully passed the various selection stages.
  • Arnaud, the youngest of the group, heard about the school as early as 2014. Not convinced at first, it was only 2 years later, during his computer science studies, that he decided to join the school, disappointed (and also surprised) by traditional teaching methods.

They are fairly confident!


End of the story: DNA did indeed manage to integrate MongoDB as Zenko’s metadata backend and delivered a nice proof of concept, as they say. Their code is available on their Git repo and, even though they didn’t win, they convinced the jury that their approach was worth exploring.

Hackathon Zenko x 42 Paris: introducing team Ze Janitorz

Scality 42 hackathon team Ze Janitorz

It doesn’t matter that Thomas doesn’t like the team name. Sarah and Clémentine love it! And some will say “never mind”, it’s the project that counts!

But what are we talking about, exactly? The idea is to build an application in Zenko that shows where the data lives and whether it would be more worthwhile to put it elsewhere (you can also compare the cost of moving the data against the cost of leaving it where it is; sometimes it is financially smarter not to move it). The ultimate goal: optimize storage costs and offer faster data access (performance). The team also considered adding a third pillar (a “bonus gift”?) around data security, but time may run out.

This project was not one Scality proposed for the hackathon. But through Thomas’s questions during the presentation of the other projects, it seemed relevant to explore this question of moving items between clouds (which doesn’t exist yet). Sarah and Clémentine hesitated at first, unsure they had the required level or the necessary time, but the involvement of the more experienced Thomas (he has been at the school for 3 years) and the prospect of being mentored convinced them to go for it!

Just a year ago, Sarah and Clémentine had no idea they would be taking part in this hackathon. For both young women, joining École 42 was a career change!

Sarah worked for a year and a half at the well-known polling institute TNS Sofres but couldn’t thrive in an organization she found old-fashioned in the way it was run.

Clémentine was an actuary (yes, you read that right) before joining École 42; a profession (math applied to insurance) that few people know about. She didn’t feel useful enough and had to cope with working methods that didn’t suit her. Thomas, knowing she enjoyed programming, suggested she try the Piscine. She is now very happy!

Thomas first went to business school, then joined École 42’s second cohort, feeling he had enough maturity by then to handle the school’s underlying rhythm (100% freedom). Now a developer, with no obligation to come back to the school, he nonetheless didn’t hesitate to commit alongside Sarah and Clémentine, knowing they mastered none of the technologies at the start of the project. He is, in fact, blown away by their capacity to learn: they have made the project completely their own!


End of the story: Ze Janitorz took third place in this hackathon, with a project that stands out not for its business application (over the course of their market study, they realized it was too expensive to move the data; better to put it in different places from the start and route traffic based on cost!), but for its technical achievement. The code is available here and, while our three devs already had internships lined up for next semester, they hope to work at Scality soon!