Sr. Data Engineer

US-WA-Seattle
4 months ago
Job ID
537761

Job Description

Our mission in Softlines is to be the number-one destination for customers to purchase Softlines products and to serve the breadth of global customers unlike any other retailer.

As an Amazon.com Senior Data Engineer,
  • You will be working in one of the world's largest and most complex data warehouse environments.
  • You should be an expert in the architecture of DW solutions for the Enterprise using multiple platforms (RDBMS, Columnar, Cloud).
  • You should excel in the design, creation, management, and business use of extremely large datasets.
  • You should have excellent business and communication skills to be able to work with business owners to develop and define key business questions, and to build data sets that answer those questions.
  • Above all, you should be passionate about working with huge datasets and love bringing datasets together to answer business questions and drive change.

As a Senior Data Engineer on the Softlines team,
  • You will be assisting with the adoption of Redshift as our primary processing platform.
  • You will be part of a team building the next-generation data warehouse platform and driving the adoption of new technologies and practices in existing implementations.
  • You will be responsible for designing and implementing complex ETL pipelines in the data warehouse platform and other BI solutions to support rapidly growing and dynamic business demand for data, delivering data as a service that has an immediate influence on day-to-day decision making at Amazon.com.
  • You will be mentoring junior engineers and leading communications with management and other teams.
  • You will be building and migrating complex ETL pipelines from Oracle systems to Redshift and Elastic MapReduce so the platform can scale elastically.
  • You will be optimizing the performance of business-critical queries and resolving ETL job-related issues.
  • You will be tuning application and query performance using Unix profiling tools, Python, and SQL.
  • You will be extracting and combining data from various heterogeneous data sources.
  • You will be designing, implementing, and supporting a platform that provides ad-hoc access to large datasets.
  • You will be modeling data and metadata to support ad-hoc and pre-built reporting.

Basic Qualifications

  • A desire to work in a collaborative, intellectually curious environment.
  • Degree in Computer Science, Engineering, Mathematics, or a related field, or 7+ years of industry experience.
  • Demonstrated strength in data modeling, ETL development, and data warehousing.
  • Data warehousing experience with Oracle, Redshift, Teradata, etc.

Preferred Qualifications

  • Industry experience as a Data Engineer or related specialty (e.g., Software Engineer, Business Intelligence Engineer, Data Scientist) with a track record of manipulating, processing, and extracting value from large datasets.
  • Coding proficiency in at least one modern programming language (Python, Ruby, Java, etc.)
  • Experience building and operating highly available, distributed systems for data extraction, ingestion, and processing of large datasets
  • Experience building data products incrementally and integrating and managing datasets from multiple sources
  • Query performance tuning skills using Unix profiling tools and SQL
  • Experience leading large-scale data warehousing and analytics projects, including AWS technologies (Redshift, S3, EC2, Data Pipeline) and other big data technologies
  • Experience providing technical leadership and mentoring other engineers on best practices in the data engineering space
  • Experience with big data technologies (Hadoop, Hive, HBase, Pig, Spark, etc.)
  • Linux/UNIX experience, including using it to process large datasets.
  • Experience with AWS