
Welcome to the platform!

We are excited to welcome you to the platform. Before you start, we would like to explain a few basic things about the world of AI and the platform itself.

If you already have experience with zerocodeai or AI, just skip this post and get started right away.


Do you need help discussing your business case?

Contact our experienced Sales Team now



The best way to get to know the platform is simply to start. Of course, it makes a big difference whether you're a student experimenting with new ideas or an enterprise leader who needs to introduce an AI-first culture. For that case, too, we have written a chapter with more valuable learnings.





Start with AI Concepts

Before starting with anything else, there are a few AI concepts you need to be familiar with.

Continue with Deep Learning problem types

You can solve many things with deep learning, but not everything. These are the problem types you can solve with it.

Learn more about Data for Deep Learning 

Every deep learning project starts with having good data. In this article, we’ll give you a broad overview of what to consider when creating a dataset.

Check out the workflow

All deep learning projects follow the same steps, and the platform is built around this proven structure.

Quick Overview

AI, ML and DL are often used interchangeably, but they do not mean the same thing.

Artificial Intelligence (AI) is the science and engineering of building intelligent machines.

Machine Learning (ML) is a subset of AI. Machine Learning algorithms allow computers to solve problems using data as examples instead of coding an explicit set of rules, as in traditional software development.

Deep Learning (DL) is a type of machine learning capable of working with complex, unstructured data like text or images, but it also works great for many use cases based on structured tabular data. DL models learn both to represent data and to make predictions. Deep Learning is what you do on the platform.
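To make the AI/ML distinction concrete, here is a toy sketch (illustrative only, not platform code): traditional software hard-codes the rule, while machine learning derives the rule from labeled example data.

```python
# Traditional approach: an explicit, hand-written rule.
def is_spam_rule(message: str) -> bool:
    return "free money" in message.lower()

# ML approach (greatly simplified): learn a decision threshold from examples
# instead of hand-coding it. Feature: number of exclamation marks in a
# message; label: 1 = spam, 0 = not spam.
def learn_threshold(examples):
    """Find the midpoint between the two classes' average feature values."""
    positives = [x for x, label in examples if label == 1]
    negatives = [x for x, label in examples if label == 0]
    return (sum(positives) / len(positives) + sum(negatives) / len(negatives)) / 2

training_data = [(5, 1), (7, 1), (0, 0), (1, 0)]
threshold = learn_threshold(training_data)  # learned from data, not hand-coded

def is_spam_learned(num_exclamations: int) -> bool:
    return num_exclamations > threshold
```

The rule-based function breaks as soon as spammers change their wording; the learned one adapts when you retrain it on fresh examples.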


Datasets
All data on the platform is clustered into datasets.

The datasets can consist of different kinds of data:

Text data
For example: chat messages, tweets, customer feedback, lyrics, books, etc.

Tabular data
For example: sales data, customer info, sensor data, etc.

Image data
For example: photos, satellite images, heat maps, x-rays, etc.

Audio data
For example: voice files like conversation recordings.

Features
All datasets have features, and each feature has a feature encoding that determines how example data is turned into numbers that a model can do calculations with. For example: categorical with labels, numeric, or text.
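The idea of a feature encoding can be sketched in a few lines (an assumed, simplified illustration, not the platform's internal code): categorical values become one-hot vectors, numeric values are scaled, and text can be reduced to simple word counts.

```python
def one_hot(value, categories):
    """Categorical encoding: a 1 at the category's position, 0 elsewhere."""
    return [1.0 if value == c else 0.0 for c in categories]

def min_max_scale(x, lo, hi):
    """Numeric encoding: squash a value into the range [0, 1]."""
    return (x - lo) / (hi - lo)

def bag_of_words(text, vocabulary):
    """A very simple text encoding: count vocabulary words in the text."""
    words = text.lower().split()
    return [float(words.count(w)) for w in vocabulary]

print(one_hot("cat", ["cat", "dog", "bird"]))  # [1.0, 0.0, 0.0]
print(min_max_scale(50, 0, 100))               # 0.5
print(bag_of_words("good good product", ["good", "bad", "product"]))  # [2.0, 0.0, 1.0]
```

Whatever the encoding, the result is always the same: lists of numbers the model can calculate with.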

Subsets
Each dataset is split into subsets on the platform. By default, one subset is used for training your model, and one is used to validate your model.
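Conceptually, a default split works something like this sketch (the platform handles it for you; the 80/20 fraction here is just an assumed example):

```python
import random

def split_dataset(rows, validation_fraction=0.2, seed=42):
    """Shuffle, then hold out a fraction of the rows for validation."""
    rng = random.Random(seed)        # fixed seed -> reproducible split
    shuffled = rows[:]               # copy so the input stays untouched
    rng.shuffle(shuffled)
    n_val = int(len(shuffled) * validation_fraction)
    return shuffled[n_val:], shuffled[:n_val]  # (training, validation)

rows = list(range(100))
train, val = split_dataset(rows)
print(len(train), len(val))  # 80 20
```

The key point: the model never trains on the validation rows, so the validation score tells you how the model handles data it has not seen before.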

The platform can help you solve the following problem types:

Classification
When you want to predict which class (label) an example belongs to.

Single-label classification
When an example belongs to only one class and has only one label. For example: cat or dog.

Multi-label classification
When an example belongs to several classes and can have several labels. For example, a day can be both rainy and cloudy.

Regression
When you want to predict a number. For example: the revenue of a shop.

Similarity
Find similar images or pieces of text. For example: similar images of shoes.

These are the problem types you can solve on the platform right now. Note that you can use multiple types of input data, for example both tabular and image data, to solve a regression problem.
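The difference between the problem types above comes down to what the target looks like. A minimal sketch (illustrative values only):

```python
# Single-label classification: exactly one class per example, so the class
# scores form a probability distribution and you pick the highest one.
scores = {"cat": 0.9, "dog": 0.1}
assert abs(sum(scores.values()) - 1.0) < 1e-9
prediction = max(scores, key=scores.get)      # -> "cat"

# Multi-label classification: each label gets its own independent yes/no
# score, so any number of labels can be active at once.
multi_scores = {"rainy": 0.8, "cloudy": 0.7, "sunny": 0.1}
predicted_labels = {label for label, s in multi_scores.items() if s > 0.5}
print(sorted(predicted_labels))               # ['cloudy', 'rainy']

# Regression: the target is simply a number, e.g. a shop's revenue.
regression_target = 1234.5
```

Notice that multi-label scores do not need to sum to 1, which is exactly what lets a day be both rainy and cloudy.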


Tabular regression

Predict a number based on tabular data



Price predictions

Sales forecast


Image segmentation

Segment out certain parts on an image


Highlight abnormalities in medical imagery 

Landscape health monitoring


Tabular classification

Assign a label to tabular data


Customer purchase prediction



Image classification

Assign a label to an image


Quality assurance & Defect detection

Automatic tagging / labeling 


Text similarity

Find similar pieces of text


Find similar customer questions & answers


Image regression

Predict a number based on image data


Precision farming 

Optimizing plant production 


Text classification

Assign a label to a piece of text.


Content moderation


Image similarity

Find similar images


Visual search

Automatic tagging


Text regression

Predict a number based on text


Grade texts based on how well-written they are , or automatically suggest a review rating.


Audio files

Convert to spectrogram and use it as an image.


Audio analysis for industrial maintenance
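The spectrogram trick can be sketched with the standard library (an assumed, naive approach for illustration; real pipelines use optimized FFTs): slice the signal into short frames, take the magnitude of each frame's discrete Fourier transform, and stack the frames into a 2-D time-by-frequency "image" that a vision model can work with.

```python
import cmath
import math

def dft_magnitudes(frame):
    """Naive DFT: magnitude of each frequency bin (fine for a sketch)."""
    n = len(frame)
    return [
        abs(sum(x * cmath.exp(-2j * math.pi * k * i / n)
                for i, x in enumerate(frame)))
        for k in range(n // 2)
    ]

def spectrogram(signal, frame_size=64):
    """Split the signal into frames and transform each one."""
    frames = [signal[i:i + frame_size]
              for i in range(0, len(signal) - frame_size + 1, frame_size)]
    return [dft_magnitudes(f) for f in frames]

# A pure sine wave (4 cycles per frame) lights up exactly one frequency bin.
signal = [math.sin(2 * math.pi * 4 * t / 64) for t in range(256)]
image = spectrogram(signal)
print(len(image), len(image[0]))  # 4 frames x 32 frequency bins
```

Once the audio is an "image" like this, all the image problem types above (classification, regression, similarity) apply to it directly.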


Figuring out how much data you need is, unfortunately, not an exact science or number, but below is a framework for how you could think when building your dataset.

You need to figure out how complex your data is, whether the data covers everything it needs to cover, and whether you have enough data for all features in your data.

Think about the following questions:

How complex are my classes or features?

Example: Binary classification is simple. Multi-label classification needs more data.

How independent are the labels of each other?

Is there overlap, or do the features depend on each other? If there is overlap, the model will struggle and will need more data.

Example: Black or white is simple. 50 shades of grey need more data.

Think about the following questions:

What percentage of the total population is represented in this data?
Beware of biased data.

Do you need to add or remove data? The platform gives you some tools to handle this.

If your data is old, ask yourself whether it is still relevant.

Does the data cover what the model will see in the real world?
Your data might work really well in the lab, but does it reflect the real world?

Example: You’ve collected training data in Germany and want to use the trained model in Brazil.


Only in prepared datasets do you have exactly the same number of samples for each label. Real-world data is never perfectly balanced.

Think about the following questions:

Are all classes well represented in this data?

Do you have enough samples for each class?

Example: If your dataset consists of 1,000 rows but all of them have the same label, the model will fail. The platform gives you some tools to handle this; for example, you can use class weights on the Target block.
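The idea behind class weights can be sketched as follows (an assumed inverse-frequency formula for illustration; check the Target block's documentation for the platform's exact scheme): rare classes get a larger weight so the model cannot score well by simply ignoring them.

```python
from collections import Counter

def class_weights(labels):
    """Weight each class inversely to how often it appears."""
    counts = Counter(labels)
    total = len(labels)
    return {cls: total / (len(counts) * n) for cls, n in counts.items()}

# 90 "ok" rows vs 10 "defect" rows: heavily imbalanced.
labels = ["ok"] * 90 + ["defect"] * 10
print(class_weights(labels))  # defect gets 5.0, ok gets ~0.56
```

During training, each example's loss is multiplied by its class weight, so a mistake on a rare "defect" row hurts the model roughly nine times more than a mistake on a common "ok" row.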


How much data do I need? It depends!
Maybe you should instead ask yourself: how much data do I need to do what?

The platform is a collaborative end-to-end environment for developing, managing, and deploying AI models at scale. It has everything you need to execute an AI project from start to finish in a single software platform.

The platform workflow consists of 5 views (or screens, if you prefer), each encapsulating a specific step in the model development process.
The different views map to the AI project workflow: you start by defining your business problem, then dive into your data to understand and prepare it. After that, you build your model, then train and evaluate it. When you're satisfied, you can deploy your model.


The Datasets view is the view for data understanding and data preparation. It allows you to edit and inspect the data that your project can use.

In this view you can:

Import datasets into the platform from your local drive or from a URL. Make sure that the dataset is structured according to our requirements.

Manage versions of your datasets.

Edit a dataset to better fit your model.

Manage the Data API to enable dataset uploads from your own applications.

The modeling view lets you build a deep learning model yourself or work with a colleague on the same model. There are no limits to how you can build the model; it's as easy to make a simple one as a complex, multi-layered one.

In the evaluation view, you can see in real time how the model is performing as it learns from the data. Not learning as well as expected? Then pause the experiment, go back to the Datasets view or Modeling view, make some tweaks, and get running again, quick-iteration style.

The deployment solution provides the means to quickly test model prototypes directly in your services. It also provides the stability and scalability you need for a system that will be deployed for longer periods of time, with a reliable model for server-to-server integration.
