Data Engineer - Machine Learning
With more than 2,600 locations nationwide, Public Storage is the leader in the self-storage industry, and given our number of tenants, we may very well be the world's largest landlord. We've experienced unprecedented growth over the past four decades, and it's in no small part due to the dedicated team that has helped us become an S&P 500 industry leader, the country's largest real estate investment trust (REIT), and the most recognizable name in self-storage.
We are currently looking for a Data Engineer to join our Machine Learning Data & Analytics practice in Glendale, CA. The Data Engineer will help us build data products that support analytics use cases by expanding and optimizing our data warehouse and data pipeline architecture. The ideal candidate is an experienced data pipeline builder with cloud experience who enjoys optimizing data systems and building them from the ground up in collaboration with business stakeholders and internal Data Analytics team members. This role will have significant ownership of the design and implementation of Public Storage's future analytics data warehouse.
Responsibilities:
- Maintain our data warehouse with timely, high-quality data
- Build and maintain data pipelines from internal databases and APIs
- Create and maintain architecture and systems documentation
- Drive initiatives around architecture design and implementation
- Plan and execute system expansion as needed to support the company's growth and analytic needs
- Collaborate with Data Engineers and Data Scientists to drive efficiencies for their work
- Collaborate with other functions to ensure data needs are addressed
- Establish and teach standards for code maintainability and performance through code submission and review
Qualifications:
- Bachelor's or Master's degree in a STEM field such as computer science, math, or physics, or in business with strong technical acumen (required)
- 3+ years of hands-on experience deploying production-quality code
- Demonstrably deep understanding of SQL and analytical data warehouses (we use BigQuery)
- Strong data modeling skills and familiarity with the Kimball methodology
- Hands-on experience implementing ETL (or ELT) best practices at scale
- Hands-on experience with data pipeline tools (Airflow, Luigi, Azkaban, dbt) - we use Airflow and dbt
- Professional experience using Python for data processing
- Knowledge of and experience with data-related Python packages
- Experience with software engineering best practices, such as version control with Git
- Experience with cloud environments (e.g., AWS, GCP) is a plus
- Desire to keep up with ongoing advancements in engineering practices
All your information will be kept confidential according to EEO guidelines.