Getting started
Cloud Annotations makes labeling images and training machine learning models easy. Whether you’ve never touched a line of code in your life or you’re a TensorFlow ninja, these docs will help you build what you need. Let’s get started!
Sign up for IBM Cloud
Cloud Annotations is built on top of IBM Cloud Object Storage. Using a cloud object storage offering provides a reliable place to store training data. It also opens up the potential for collaboration, letting a team simultaneously annotate a dataset in real time.
IBM Cloud offers a lite tier of object storage, which includes 25 GB of free storage.
Before you start, sign up for a free IBM Cloud account.
Preparing training data
To train a computer vision model you need a lot of images. Cloud Annotations supports uploading both photos and videos. However, before you start snapping, there are a few limitations to consider.
Training data best practices
- Object Type: The model is optimized for photographs of objects in the real world. It is unlikely to work well for x-rays, hand drawings, scanned documents, receipts, etc.
- Object Environment: The training data should be as close as possible to the data on which predictions will be made. For example, if your use case involves blurry, low-resolution images (such as from a security camera), your training data should be composed of blurry, low-resolution images. In general, you should also provide multiple angles, resolutions, and backgrounds for your training images.
- Difficulty: The model generally can't predict labels that humans can't assign. So, if a human can't be trained to assign labels by looking at the image for 1-2 seconds, the model likely can't be trained to do it either.
- Label Count: We recommend at least 50 labels per object category for a usable model; hundreds or thousands will give better results.
- Image Dimensions: The model resizes every image to 300x300 pixels, so keep that in mind when training the model with images where one dimension is much longer than the other.
- Object Size: The object of interest should cover at least ~5% of the image area to be detected. For example, on the resized 300x300 pixel image the object should cover roughly 60x60 pixels.
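The size guideline above can be sketched as a quick check, assuming normalized (0-1) box coordinates as used by Cloud Annotations; the helper names here are illustrative, not part of any Cloud Annotations API:

```python
# Sketch: check whether a labeled object is likely large enough to detect,
# based on the ~5%-of-image-area guideline. Box coordinates are normalized
# (0-1), matching the Cloud Annotations annotation format.

MODEL_INPUT = 300  # the model resizes every image to 300x300 pixels

def object_area_fraction(x, y, x2, y2):
    """Fraction of the image area covered by a normalized bounding box."""
    return max(0.0, x2 - x) * max(0.0, y2 - y)

def is_detectable(x, y, x2, y2, min_fraction=0.05):
    """True if the box covers at least ~5% of the image area."""
    return object_area_fraction(x, y, x2, y2) >= min_fraction

# A box covering 20% x 25% of the image -> exactly 5% of the area.
print(is_detectable(0.0, 0.0, 0.20, 0.25))   # True
# A tiny box (~1% of the area) is unlikely to be detected reliably.
print(is_detectable(0.45, 0.45, 0.55, 0.55)) # False

# On the resized 300x300 input, 5% of the area is roughly a 67x67 pixel square.
side = (0.05 ** 0.5) * MODEL_INPUT
print(round(side))  # 67
```

This is why very small or distant objects tend to be missed: after the resize to 300x300, they simply occupy too few pixels.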
Set up Cloud Annotations
To use Cloud Annotations just navigate to cloud.annotations.ai and click Continue with IBM Cloud.

Once logged in, if you don’t have an object storage instance, it will prompt you to create one. Click Get started to be directed to IBM Cloud, where you can create a free object storage instance.

You might need to re-login to IBM Cloud to create a resource.

Choose a pricing plan and click Create, then Confirm on the following popup.

Once your object storage instance has been provisioned, navigate back to cloud.annotations.ai and refresh the page.
The files and annotations will be stored in a bucket. You can create one by clicking Start a new project.

Give the bucket a unique name.

Object detection or classification?
A classification model can tell you what an image is and how confident it is about its decision. An object detection model can provide you with much more information:
- Location: The coordinates and area of where the object is in the image.
- Count: The number of objects found in the image.
- Size: How large the object is with respect to the image dimensions.
If an object detection model gives us this extra information, why would we use classification?
- Labor Cost: An object detection model requires humans to draw boxes around every object to train. A classification model only requires a simple label for each image.
- Training Cost: It can take longer and require more expensive hardware to train an object detection model.
- Inference Cost: An object detection model can be much slower than real-time to process an image on low-end hardware.
Object detection
After your bucket is created and named, it will prompt you to choose an annotation type. Choose Localization; this enables bounding box drawing.

Labeling the data
- Upload a video or some images
- Create the desired labels
- Start drawing bounding boxes
Keyboard shortcuts
Functionality | Mac | Windows / Linux |
---|---|---|
Cycle active label | Q | Q |
Switch active label | 0 - 9 | 0 - 9 |
Next image | Right / Spacebar | Right / Spacebar |
Previous image | Left | Left |
Multiselect images | ⌘ + Click | Ctrl + Click |
Temporarily activate the ✥ tool | Hold ⌘ | Hold Ctrl |
Classification
After your bucket is created and named, it will prompt you to choose an annotation type. Choose Classification.

Labeling the data
- Create the desired labels
- Upload a video or some images
- Select images, then choose Label > DESIRED_LABEL
Pro Tip: Upload images of the same class and use ⌘ + A (Ctrl + A on Windows) to assign the same label to all of the unlabeled images at once.
Keyboard shortcuts
Functionality | Mac | Windows / Linux |
---|---|---|
Select all images | ⌘ + A | Ctrl + A |
Expand selection | Shift + Click | Shift + Click |
Labeling with a team
To give someone access to your project, you need to set up an Identity & Access Management (IAM) policy.
Navigate to IBM Cloud.
From the titlebar, choose Manage > Access (IAM).

Invite users
Invite the user by choosing the Users sidebar item and clicking Invite users.

Enter their email address, then click Invite.

Create an access group
For Cloud Annotations to work properly, the user will need:
- Operator platform access
  - Able to view the service instance in Cloud Annotations
  - Able to generate the credentials needed for training
- Writer service access
  - Able to View/Upload/Delete files in object storage

Create an access group by choosing the Access groups sidebar item and clicking Create.

Give the access group a name.

Add the invited user to the access group by clicking Add users.

Select the user from the list and click Add to group.

Choose the Access policies tab and click Assign access.

Choose Cloud Object Storage from the dropdown; this will enable the rest of the options.
For Service instance, choose the Cloud Object Storage instance affiliated with your Cloud Annotations project.
For access, choose:
- Operator
- Writer
Then click Add.

Once added, click Assign.

Once assigned, the invited users should automatically be able to see the project in Cloud Annotations. To invite additional users, just add them to the access group you just created.
Uploading images/labels via API
Cloud Annotations is built on top of Cloud Object Storage (COS).
Any images located inside your bucket will be visible from the Cloud Annotation GUI.
Additionally, a file named _annotations.json located at the root of your bucket is responsible for all annotation metadata.
For full COS documentation, see IBM Cloud Docs.
Example annotation file
The following is an example of the annotation file for an object detection project.
There is one image, image1.jpg, with two bounding boxes (1 cat + 1 dog).
```
{
  "version": "1.0",
  "type": "localization",
  "labels": ["Cat", "Dog"],
  "annotations": {
    "image1.jpg": [
      {
        "x": 0.7255949630314233,
        "x2": 0.9695875693160814,
        "y": 0.5820120073891626,
        "y2": 1,
        "label": "Cat"
      },
      {
        "x": 0.8845598428835489,
        "x2": 1,
        "y": 0.1829972290640394,
        "y2": 0.966248460591133,
        "label": "Dog"
      }
    ]
  }
}
```
Note: The _annotations.json for classification projects will look identical, minus the bounding box coordinates.
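As a sketch of how this file might be consumed, the following reads an annotation structure like the example above and converts its normalized (0-1) coordinates to pixel boxes. The 640x480 image size and the helper name are assumptions for illustration only:

```python
import json

# Sketch: parse an _annotations.json-style document and convert its
# normalized bounding boxes into pixel coordinates for a given image size.

annotations_json = """
{
  "version": "1.0",
  "type": "localization",
  "labels": ["Cat", "Dog"],
  "annotations": {
    "image1.jpg": [
      {"x": 0.25, "x2": 0.75, "y": 0.1, "y2": 0.9, "label": "Cat"}
    ]
  }
}
"""

def to_pixel_boxes(data, width, height):
    """Return {image: [(label, (x, y, x2, y2) in pixels), ...]}."""
    result = {}
    for image, boxes in data["annotations"].items():
        result[image] = [
            (b["label"],
             (round(b["x"] * width), round(b["y"] * height),
              round(b["x2"] * width), round(b["y2"] * height)))
            for b in boxes
        ]
    return result

data = json.loads(annotations_json)
print(to_pixel_boxes(data, 640, 480))
# {'image1.jpg': [('Cat', (160, 48, 480, 432))]}
```

Because the stored coordinates are normalized, the same annotation file remains valid no matter what resolution the image is later displayed or processed at.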
Uploading a file with curl
Retrieve an access_token from your IAM credentials:
```
curl -X POST "https://iam.cloud.ibm.com/identity/token" \
  -d "response_type=cloud_iam" \
  -d "grant_type=urn:ibm:params:oauth:grant-type:apikey" \
  -d "apikey=APIKEY"
```
Upload a file:
```
curl -X PUT "https://s3.us.cloud-object-storage.appdomain.cloud/BUCKET/FILE_NAME" \
  -H "Authorization: bearer ACCESS_TOKEN" \
  -T "PATH/TO/A/FILE"
```
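If you prefer scripting the upload, the same request can be assembled programmatically. This sketch only builds the URL and header that the curl command above sends; it makes no network call, and the bucket, file, and token values are placeholders:

```python
# Sketch: assemble the pieces of a COS PUT-object request, mirroring the
# curl command above. Pass the result to any HTTP client to perform the
# actual upload. The endpoint assumes the us-geo region shown in the docs.

COS_ENDPOINT = "https://s3.us.cloud-object-storage.appdomain.cloud"

def build_upload_request(bucket, file_name, access_token):
    """Return (url, headers) for a PUT upload to Cloud Object Storage."""
    url = f"{COS_ENDPOINT}/{bucket}/{file_name}"
    headers = {"Authorization": f"bearer {access_token}"}
    return url, headers

url, headers = build_upload_request("my-bucket", "image1.jpg", "TOKEN")
print(url)  # https://s3.us.cloud-object-storage.appdomain.cloud/my-bucket/image1.jpg
```

The endpoint varies by region, so check your instance's endpoint list before reusing this value.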
Object Storage SDKs and CLI
Exporting annotations via GUI
Documentation coming soon
Exporting annotations via API
Documentation coming soon
Training overview
Documentation coming soon
Training via GUI
Once you have labeled a sufficient number of photos, click Train Model. A dialog will appear, prompting you to select your Watson Machine Learning instance. If none are available, it will guide you through creating a new one (you may need to refresh your Cloud Annotations window for the new instance to appear, but don’t worry, your labels will be saved).

Click Train. Your training job will now be added to the queue.
You will see it listed as pending until the training starts (this could take several minutes).

Once your training job starts, the status will change and you will see a graph of the training steps running.

Once the job is completed, you’re all set!
Installing the CLI
To train our model we need to install the Cloud Annotations CLI.
Homebrew (macOS)
If you are on macOS and using Homebrew, you can install cacli with the following:

```
$ brew install cacli
```
Shell script (Linux / macOS)
If you are on Linux or macOS, you can install cacli with the following:

```
$ curl -sSL https://cloud.annotations.ai/install.sh | sh
```
Windows
- Download the binary.
- Rename it to cacli.exe.
- cd to the directory where it was downloaded.
- Run cacli --version to check that it’s working.
- (Optional) Add cacli.exe to your PATH to access it from any location.
Binary
Download the appropriate version for your platform from the releases page. Once downloaded, the binary can be run from anywhere. You don’t need to install it into a global location. This works well for shared hosts and other systems where you don’t have a privileged account.
Ideally, you should install it somewhere in your PATH for easy use. /usr/local/bin is the most common location.
Training via CLI
Documentation coming soon
Training with Google Colab
Google Colaboratory, or “Colab” for short, is a product from Google Research. Colab allows anybody to write and execute arbitrary Python code through the browser, and is especially well suited to machine learning, data analysis, and education. More technically, Colab is a hosted Jupyter notebook service that requires no setup to use, while providing free access to computing resources, including GPUs.
To use Google Colab all you need is a standard Google Account.
Note: These steps assume you have already labeled a dataset for object detection.
Exporting annotations
To train a model in Google Colab, the annotations need to be located in Google Drive. You can export your data from Cloud Annotations via the following steps:
- Choose File > Export as Create ML

Uploading to Google Drive
Once exported, you should have a file named <bucket-name>.zip.
Unzip it and upload the resulting folder to Google Drive.

Using Google Colab
Non-interactive training
Documentation coming soon
Custom training scripts
Documentation coming soon
Downloading a model via GUI
From an existing project, select Training runs > View all

Select a completed training job from the left-hand side and click Download. A zip file will be created containing your trained model files.
Downloading a model via CLI
To download a trained model with the Cloud Annotations CLI, simply use the cacli download command:

```
cacli download <model-id>
```

Note: The model-id can be obtained by running cacli list to list all available training runs.
Using a model
You can use a model by copying the downloaded model folder into one of the following demos. Further usage instructions are provided on the demo’s GitHub repo.
Classification demos
Object detection demos
Auto labeling
Documentation coming soon