Amazon SageMaker

Overview

What is Amazon SageMaker?

Amazon SageMaker enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. Amazon SageMaker removes all the barriers that typically slow down developers who want to use machine learning.
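
As a rough illustration of that build, train, and deploy flow (an editorial sketch, not part of the vendor description; the entry-point script, S3 paths, instance types, and framework version are placeholder assumptions), a minimal workflow with the SageMaker Python SDK might look like this:

# Minimal sketch of a SageMaker Python SDK train-and-deploy flow.
# Assumes an AWS account with a SageMaker execution role and training data
# already in S3; the script name, paths, and instance types are placeholders.
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # works inside a SageMaker notebook

estimator = SKLearn(
    entry_point="train.py",          # hypothetical training script
    role=role,
    instance_type="ml.m5.large",
    framework_version="1.2-1",       # pick a version available in your SDK/region
    sagemaker_session=session,
)

# Launch a managed training job; the channel name and S3 URI are examples.
estimator.fit({"train": "s3://example-bucket/training-data/"})

# Deploy the trained model behind a managed HTTPS endpoint.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
print(predictor.endpoint_name)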

Recent Reviews

AWS - The best!

9 out of 10
May 21, 2018
Incentivized
Amazon SageMaker is currently being used by our analytics and technology groups but managed by the associates at our firm. It addresses …

Product Details

Amazon SageMaker Technical Details

Deployment Types: Software as a Service (SaaS), Cloud, or Web-Based
Operating Systems: Unspecified
Mobile Application: No

Reviews and Ratings (48)

Reviews (1-3 of 3)
Score 9 out of 10
Vetted Review
Verified User
Incentivized
Amazon SageMaker has multiple applications and use cases in our organization. It is used to create machine learning models for our call center team to analyze frequently raised customer problems and the most widely accepted solutions. These models help reduce operating costs by automating and optimizing processes with minimal manual intervention. Another use case is product development, which requires decision making based on image processing.
  • Machine learning at scale by deploying huge amounts of training data
  • Accelerated data processing for faster outputs and learnings
  • Kubernetes integration for containerized deployments
  • Creating API endpoints for use by technical users (see the sketch after this list)
  • The UI could be simplified for business analysts and non-technical users
  • The platform lags a bit when pulling huge amounts of data from legacy solutions
  • Considering that ML is an emerging topic that most organizations will use in the future, the pipeline integrations could be optimized
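
As an illustration of the "creating API endpoints" point above (an editorial sketch, not the reviewer's code; the endpoint name and payload are hypothetical), a client application typically calls a deployed SageMaker endpoint through the sagemaker-runtime API:

# Sketch: invoking an already-deployed SageMaker endpoint from client code.
# The endpoint name and CSV payload below are hypothetical placeholders.
import boto3

runtime = boto3.client("sagemaker-runtime")

response = runtime.invoke_endpoint(
    EndpointName="call-center-intent-classifier",  # hypothetical endpoint name
    ContentType="text/csv",
    Body="5.1,3.5,1.4,0.2",                        # one example feature row
)

prediction = response["Body"].read().decode("utf-8")
print(prediction)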
Amazon SageMaker is well suited to data science and machine learning work where medium- to high-volume data is to be used for analysis.
For a lean and platform-agnostic deployment, it provides Kubernetes integration to containerize the solution and deploy it on any platform.
It is one of the best solutions for technical users training machine learning models.
  • Studio Lab
  • Model Training
  • Pipelines
  • Kubernetes Integration
  • Machine learning models help reduce operating costs for manual-intensive processes by deploying chatbots
  • Improvement in the product roadmap by learning about customer feedback at an early stage
  • Supporting the analytics and data science teams in sharing correct insights and models with business teams
Score 10 out of 10
Vetted Review
Verified User
Incentivized
We are using the SageMaker service from AWS for POCs and to build the final model on a large healthcare-domain dataset within the R&D department. SageMaker also provides hosting functionality, so we can host a trained model for the end-level application, accessible through a simple API call from any application.
  • Provides a Jupyter notebook instance for development, which makes it very easy to manage and develop any script.
  • Our system is cloud-based, and we are charged only for what we use and how long we use it.
  • We can choose multiple servers for training without any headache of distribution (see the sketch after this list).
  • Most of the libraries are supported.
  • All training data, test data, and models are stored on S3, so it's very easy to access them whenever we need.
  • It's very good for the hardcore programmer, but a little complex for a data scientist or new hire who does not have a strong programming background.
  • Most of the popular libraries and ML frameworks are there, but we still have to depend on AWS for new releases.
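
As an illustration of the multi-server training and S3 points above (an editorial sketch; the built-in XGBoost algorithm, bucket names, and instance choices are assumptions, not details from the review), a distributed training job might be configured like this:

# Sketch: training on multiple instances with model artifacts written to S3.
# The algorithm (built-in XGBoost), bucket paths, and hyperparameters are
# illustrative assumptions.
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()
role = sagemaker.get_execution_role()

# Resolve the built-in XGBoost container image for the current region.
image_uri = sagemaker.image_uris.retrieve(
    framework="xgboost", region=session.boto_region_name, version="1.5-1"
)

estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=2,                           # SageMaker handles the distribution
    instance_type="ml.m5.xlarge",
    output_path="s3://example-bucket/models/",  # model artifacts land in S3
    sagemaker_session=session,
)
estimator.set_hyperparameters(objective="binary:logistic", num_round=100)

# Training data is read straight from S3; the channel name follows convention.
estimator.fit({"train": TrainingInput("s3://example-bucket/train/", content_type="text/csv")})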
Well suited scenarios:
  • For quick POC of ML and DL.
  • To train a model on a large dataset using multiple servers.
  • To host a model to be used by multiple applications.
Less appropriate scenarios:
  • For data analysis tasks.
  • For a data scientist who has less of a programming background.
  • Using SageMaker, we can truly implement 'fail early, learn fast,' using on-demand servers for training.
  • It also saves money by avoiding investment in a physical server that would only rarely be used.
  • The pricing is high, but it costs you only for what you use.
Amazon SageMaker comes with other supportive services like S3, SQS, and a vast variety of servers on EC2. It's very comfortable to manage the process, and it also supports the end application with a one-click hosting option. It charges based on what you use and how long you use it, so it becomes less costly compared to others.
Amazon Simple Queue Service (SQS), Amazon Redshift, Amazon S3 (Simple Storage Service), Amazon Elastic Compute Cloud (EC2)
Gavin Hackeling | TrustRadius Reviewer
Score 7 out of 10
Vetted Review
Verified User
Incentivized
We use SageMaker in the engineering and data science departments to host Jupyter notebooks, periodically retrain models, and serve models in production. Data scientists work in Jupyter notebooks hosted on SageMaker notebook instances instead of their local machines. We often inject models into AWS-provided containers, and use SageMaker to provide a managed, auto-scaling HTTP interface.
  • SageMaker is useful as a managed Jupyter notebook server. Using the notebook instances' IAM roles to grant access to private S3 buckets and other AWS resources is great, as is using SageMaker's lifecycle scripts and AWS Secrets Manager to inject connection strings and other secrets (see the sketch after this list).
  • SageMaker is good at serving models. The interface it provides is often clunky, but a managed, auto-scaling model server is powerful.
  • SageMaker is opinionated about versioning machine learning models and useful if you agree with its opinions.
  • SageMaker does not allow you to schedule training jobs.
  • SageMaker does not provide a mechanism for easily tracking metrics logged during training.
  • We often fit feature extraction and model pipelines. We can inject the model artifacts into AWS-provided containers, but we cannot inject the feature extractors. We could provide our own container to SageMaker instead, but this is tantamount to serving the model ourselves.
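
One simple variant of the Secrets Manager pattern mentioned in the strengths above (an editorial sketch; the secret name and JSON keys are hypothetical) is to let the notebook instance's IAM role fetch a connection string at runtime:

# Sketch: reading a database connection string from AWS Secrets Manager inside
# a SageMaker notebook, relying on the notebook instance's IAM role for access.
# The secret name and the keys in the stored JSON are hypothetical.
import json
import boto3

secrets = boto3.client("secretsmanager")

resp = secrets.get_secret_value(SecretId="prod/warehouse/connection-string")
secret = json.loads(resp["SecretString"])

conn_str = secret["connection_string"]  # assumed key in the stored secret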
SageMaker is great for hosting Jupyter notebooks, particularly if you already use other AWS products, such as S3. SageMaker's model retraining function is useful if you write a few Lambda functions to invoke jobs. Its model serving function is useful if your team has limited resources and is willing to submit to SageMaker's opinions.
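
The "few Lambda functions to invoke jobs" approach might look roughly like the following (an editorial sketch under assumed names; the image URI, role ARN, buckets, and job settings are placeholders, and the reviewer's actual functions are not shown):

# Sketch: a scheduled Lambda handler that kicks off a SageMaker training job.
# All ARNs, image URIs, and S3 paths below are hypothetical placeholders.
import time
import boto3

sm = boto3.client("sagemaker")

def handler(event, context):
    job_name = f"nightly-retrain-{int(time.time())}"
    sm.create_training_job(
        TrainingJobName=job_name,
        AlgorithmSpecification={
            "TrainingImage": "123456789012.dkr.ecr.us-east-1.amazonaws.com/train:latest",
            "TrainingInputMode": "File",
        },
        RoleArn="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
        InputDataConfig=[{
            "ChannelName": "train",
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://example-bucket/train/",
                "S3DataDistributionType": "FullyReplicated",
            }},
        }],
        OutputDataConfig={"S3OutputPath": "s3://example-bucket/models/"},
        ResourceConfig={
            "InstanceType": "ml.m5.xlarge",
            "InstanceCount": 1,
            "VolumeSizeInGB": 30,
        },
        StoppingCondition={"MaxRuntimeInSeconds": 3600},
    )
    return {"training_job_name": job_name}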
  • We have been able to deliver data products more rapidly because we spend less time building data pipelines and model servers.
  • We can prototype more rapidly because it is easy to configure notebooks to access AWS resources.
  • For our use-cases, serving models is less expensive with SageMaker than bespoke servers.