Since the TV show Silicon Valley brutally made fun of AI-powered image recognition in the infamous hotdog-not-hotdog episode, I decided it was my mission to do something better than that. Who eats hot dogs anyway? I’m French, and at Scality ‘Eat well’ is one of our core values! I wanted an app that automatically sorted melons, a delicious summer fruit with low calories, high water content, and fiber! I contacted the folks at Machine Box to get their help and used Zenko to do some magic. My mission was to train an algorithm to automatically tag images of melons based on their kind (watermelon, cantaloupe, etc.) and store them in Zenko with metadata for later retrieval. “Not funny!” screamed my colleagues, but I hadn’t meant to be funny.
We manipulate and store lots of data without being able to efficiently search and retrieve it later on. Google Photos and other tools brought automatic image recognition to the consumer space, but at the cost of losing control of your data. The compromises consumers accept are often not acceptable for corporations. AI tools like Machine Box can automatically add useful metadata to the content uploaded to your storage. With Zenko, that metadata gets indexed so you can quickly and easily search for the content you’re looking for. I prepared a demo exploring this workflow:
- Upload a set of images to Zenko via the S3 API
- Use those images as reference material for Machine Box’s TagBox
- Teach TagBox to recognize melon images and differentiate between watermelons, cantaloupes, and honeydews
- Upload new images to Zenko via the S3 API – the ones we want Machine Box to analyze and tag
- Have TagBox check each new image directly via S3 and tag it with a melon type plus a degree of confidence, along with some default built-in tags that Machine Box recognizes and returns (e.g. “Food”, “Plant”, “Fruit”)
- Attach the resulting Machine Box metadata to the object in Zenko via S3
- Use Zenko Orbit to browse the metadata and search for images tagged as watermelon with a confidence greater than 0.8
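The steps above can be sketched in Python. This is a minimal illustration, not the code from the demo: the Zenko endpoint, bucket name, and local TagBox address are hypothetical placeholders, and the `/tagbox/teach` and `/tagbox/check` endpoints are taken from Machine Box’s documented TagBox API (verify against your TagBox version). Since S3 offers no in-place metadata update, the tags are written back by copying the object onto itself with `MetadataDirective="REPLACE"`.

```python
"""Sketch of the TagBox + Zenko workflow: teach, check, tag via S3.

All endpoints and names below are hypothetical; adapt to your setup.
"""

ZENKO_ENDPOINT = "http://zenko.example.com:8000"  # hypothetical Zenko endpoint
BUCKET = "melons"                                 # hypothetical bucket
TAGBOX = "http://localhost:8080"                  # local TagBox instance


def tags_to_metadata(tags, threshold=0.0):
    """Flatten TagBox results into S3 user metadata (x-amz-meta-*).

    `tags` is a list of {"tag": ..., "confidence": ...} dicts, the
    shape TagBox's check endpoint returns.
    """
    return {
        t["tag"].lower().replace(" ", "-"): f'{t["confidence"]:.2f}'
        for t in tags
        if t["confidence"] >= threshold
    }


def teach(image_url, tag):
    """Teach TagBox a custom tag from a reference image URL."""
    import requests  # third-party; deferred so the helpers import alone
    resp = requests.post(f"{TAGBOX}/tagbox/teach",
                         json={"url": image_url, "tag": tag})
    resp.raise_for_status()


def check_and_tag(key):
    """Ask TagBox to analyze an image stored in Zenko, then write the
    resulting tags back onto the object as S3 user metadata."""
    import boto3     # third-party; deferred as above
    import requests

    image_url = f"{ZENKO_ENDPOINT}/{BUCKET}/{key}"
    resp = requests.post(f"{TAGBOX}/tagbox/check", json={"url": image_url})
    resp.raise_for_status()
    metadata = tags_to_metadata(resp.json().get("tags", []))

    # S3 has no "update metadata" call: copy the object onto itself
    # and replace its metadata in the process.
    s3 = boto3.client("s3", endpoint_url=ZENKO_ENDPOINT)
    s3.copy_object(Bucket=BUCKET, Key=key,
                   CopySource={"Bucket": BUCKET, "Key": key},
                   Metadata=metadata, MetadataDirective="REPLACE")
```

Once the metadata is on the objects, Zenko indexes it, which is what makes the confidence-based search in Orbit possible.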
It’s a lot easier to understand the different phases of this integration example if you watch the demo video: upload to Zenko via the S3 API, AI teach, AI check, metadata indexing, and search.
The multi-cloud nature of Zenko lets you use any of the public cloud providers (Amazon, Azure, Google) or on-premises storage such as a NAS or a local object store. With the same S3-based code, you can switch from an on-prem to an Amazon-based workflow simply by choosing the bucket you want to use (each bucket being associated with an Amazon, Azure, Google, or other location).
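To make that concrete, here is a minimal sketch of the idea: since each Zenko bucket is associated with a storage location, the same `put_object` call targets a different cloud just by switching the bucket name. The bucket names, the location mapping, and the endpoint below are all hypothetical.

```python
# Hypothetical mapping of targets to Zenko buckets, where each bucket
# was created against a different storage location in Zenko.
BUCKETS = {
    "on-prem": "melons-local",   # backed by local/NAS storage
    "aws": "melons-aws",         # backed by an Amazon S3 location
    "azure": "melons-azure",     # backed by an Azure Blob location
    "gcp": "melons-gcp",         # backed by a Google Cloud location
}


def upload(target, key, body):
    """Upload `body` under `key` to the bucket mapped to `target`.

    The S3 call itself is identical whatever the backing cloud is;
    only the bucket name changes.
    """
    import boto3  # third-party; deferred so the mapping imports alone
    s3 = boto3.client("s3", endpoint_url="http://zenko.example.com:8000")
    s3.put_object(Bucket=BUCKETS[target], Key=key, Body=body)


# Same code, different cloud:
# upload("on-prem", "cantaloupe-001.jpg", data)
# upload("aws", "cantaloupe-001.jpg", data)
```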
P.S. If you haven’t seen the Silicon Valley episode, here is the famous scene.