Discover MLOps

A powerful and versatile platform enabling machine learning operations at scale

1. Discover data

Easy

Discover any datasource available on your AWS account. The moment you add it, we start tracking schema changes and new incoming data, making versioning as easy as it should have been from the start.

Secure

All data tracked by MLOps remains on your account, encrypted both at rest and in transit.

Accessible

All datasources can be shared in an individual/team/organization hierarchy, making it easy to control access.

2. Create datasets

Opinionated support wheels

We value the engineering craft. But let's face it, some things you just don't want to configure. We provide a small SDK with sensible defaults for read/write strategies, as well as for how to split and version your transformations.
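To make this concrete, here is a minimal sketch of what such an opinionated dataset configuration could look like. The class and field names below are purely illustrative assumptions, not the actual MLOps SDK:

```python
from dataclasses import dataclass, field

# Illustrative only: these names sketch what a small, opinionated
# dataset config might look like; they are not the real MLOps API.
@dataclass
class DatasetConfig:
    source: str                      # datasource identifier
    read_strategy: str = "latest"    # e.g. "latest", "snapshot", "incremental"
    write_strategy: str = "append"   # e.g. "append", "overwrite"
    split: dict = field(default_factory=lambda: {"train": 0.8, "test": 0.2})
    version: str = "auto"            # let the platform assign versions

config = DatasetConfig(source="s3://my-bucket/events")
print(config.read_strategy, config.split)
```

The point of defaults like these is that you only override what matters for your pipeline.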

The frameworks you know

Datasets are created using PySpark, or in pure Python with scikit-learn and Pandas when you are experimenting or the dataset is small enough.
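For small data, the pure-Python path is just ordinary Pandas. A tiny sketch (the data and column names are made up for illustration):

```python
import pandas as pd

# A toy dataset standing in for a real datasource
df = pd.DataFrame({"user_id": [1, 2, 3, 4], "clicks": [10, 3, 7, 1]})

# A simple feature transformation
df["clicked_much"] = df["clicks"] > 5

# A deterministic train/test split via sampling
train = df.sample(frac=0.75, random_state=42)
test = df.drop(train.index)
print(len(train), len(test))  # -> 3 1
```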

Versioning

We version both your scripts and their output, creating tight links across the whole dataset pipeline. This also lets you compare datasets, both by source code and by output. It could not be easier to take the best tricks from your colleagues and apply them to your own pipeline.
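One simple way to picture those tight links is content-addressed versioning: hash both the transformation source and its output, so either can be diffed cheaply. This is a sketch of the idea, not the platform's internal mechanism:

```python
import hashlib

# Version both the transformation code and its output by content hash.
script = "df['clicked_much'] = df['clicks'] > 5"
output_rows = ["1,10,True", "2,3,False"]

script_version = hashlib.sha256(script.encode()).hexdigest()[:12]
output_version = hashlib.sha256("\n".join(output_rows).encode()).hexdigest()[:12]

# Same script + same output => identical versions, which makes
# "did that change actually alter the data?" a cheap comparison.
print(script_version, output_version)
```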

3. Train models

Start small, finish big

With MLOps you can train your model on fractions of the dataset until you are confident your algorithm kicks ass. Then choose to train on a single four-core machine, or a 100-node GPU cluster.


Use any framework

MLOps supports the most common frameworks, like TensorFlow, PyTorch, and scikit-learn. Whatever your preference, you can compare models against common metrics.
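Framework-agnostic comparison boils down to scoring each model's predictions with the same metric. A hand-rolled sketch (the model names and predictions are invented; accuracy stands in for whatever star metric you pick):

```python
# Two hypothetical models' predictions on the same held-out labels
y_true = [1, 0, 1, 1, 0]
preds = {
    "model_a": [1, 0, 1, 0, 0],   # e.g. a TensorFlow model's output
    "model_b": [1, 0, 1, 1, 0],   # e.g. a PyTorch model's output
}

def accuracy(y_true, y_pred):
    # Fraction of predictions matching the true labels
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

scores = {name: accuracy(y_true, p) for name, p in preds.items()}
best = max(scores, key=scores.get)
print(scores, best)  # -> {'model_a': 0.8, 'model_b': 1.0} model_b
```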


Collaborate and learn

As your team sets the star metric for a project, it is easy to view and review your colleagues' work as you strive for perfection. Everything from parameters to source code for the pipeline is readily available for deep comparisons.

4. Go live

Live endpoints

Provision an endpoint with any type of compute power for real-time inference. We bundle the preprocessing with the model so that the raw data can be sent straight through for inference. Monitor the performance from the dashboard.
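The value of bundling is that callers never need to know the preprocessing step exists: raw data in, prediction out. A minimal sketch of the idea, with a made-up scaling step and a stand-in model:

```python
def preprocess(raw):
    # Hypothetical step: normalize a raw count into [0, 1]
    return min(raw / 100.0, 1.0)

def model(features):
    # Stand-in for a trained model: a simple threshold classifier
    return 1 if features > 0.05 else 0

def endpoint(raw):
    # The deployed bundle: preprocessing + model behind one callable,
    # so raw data can be sent straight through for inference
    return model(preprocess(raw))

print(endpoint(10), endpoint(2))  # -> 1 0
```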


Batch jobs

For inference that needs to run on a schedule, you can create any cron-compatible trigger for your pipeline. Choose to output to S3 or DynamoDB.
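A cron-compatible trigger is just the standard five-field cron expression: minute, hour, day-of-month, month, day-of-week. For example, this one fires every day at 02:30:

```python
# Standard cron expression: minute hour day-of-month month day-of-week
trigger = "30 2 * * *"   # daily at 02:30

minute, hour, dom, month, dow = trigger.split()
print(minute, hour)  # -> 30 2
```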


Serverless

The days of server management and paying for unused compute are over. We offer serverless inference for all models smaller than 4 GB, always bundled with the preprocessing and monitored 24/7.

Great machine learning starts here