Overview
Public Storage is the world’s best owner and operator of self-storage facilities, serving millions of customers across 3,000+ locations. Public Storage’s Data and AI organization operates like a high-velocity startup inside the enterprise: modern cloud stack, rapid iteration, small expert teams, and direct impact on revenue-critical decisions every day. Our platform is built on Google Cloud (BigQuery, Vertex AI, Pub/Sub, Dataflow, Cloud Run, GKE/Terraform), dbt Cloud, Airflow/Cloud Composer, and modern CI/CD practices. We build solutions that drive significant business impact across both digital and physical channels. Engineers on our team work end-to-end: designing systems, shipping production workloads, influencing architecture, and shaping how AI is applied at national scale.
We build for both the short term and the long term. We are a dynamic, high-velocity engineering team that moves quickly from idea to production. This is a role for someone who wants to own key parts of the data & ML platform, make an immediate impact, and thrive in an environment where requirements evolve, decisions matter, and results are visible.
You are a passionate, full-stack data & ML engineer who loves to write code, build systems, and have fun, spirited debates about the “right” architecture for a specific use case. In addition to technical skills, we believe in teaching the soft leadership skills you need to advance your career over the long term.
Data Engineering & Pipeline Development (Primary) (60%)
- Architect, build, and maintain batch and streaming pipelines using BigQuery, dbt, Airflow/Cloud Composer, and Pub/Sub
- Define and implement layered data models, semantic layers, and modular pipelines that scale as use cases evolve
- Establish and enforce data-quality, observability, lineage, and schema-governance practices
- Drive efficient BigQuery design (clustering, partitioning, cost awareness), primarily for structured tabular data and, when the use case requires it, for unstructured data (web logs, call-center transcripts, images/videos, etc.)
- Leverage ML/DS capabilities in BQML for anomaly detection and disposition (see the sketch after this list)
- You will be accountable for delivering reliable, performant pipelines that enable downstream ML and analytics
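As a non-authoritative illustration of this kind of work, the sketch below creates a cost-aware partitioned and clustered BigQuery table and a BQML anomaly-detection model from Python. The dataset, table, and model names are hypothetical, and it assumes the google-cloud-bigquery client with default credentials.

```python
# Hypothetical sketch: partitioned/clustered BigQuery table plus a BQML
# anomaly-detection model. Dataset, table, and model names are illustrative.
from google.cloud import bigquery

client = bigquery.Client()  # assumes application-default credentials

# Cost-aware table design: partition by event date, cluster by common filters.
client.query("""
CREATE TABLE IF NOT EXISTS analytics.rental_events (
  event_ts TIMESTAMP,
  facility_id STRING,
  channel STRING,
  amount NUMERIC
)
PARTITION BY DATE(event_ts)
CLUSTER BY facility_id, channel
""").result()

# BQML: train a time-series model, then surface anomalies for disposition.
client.query("""
CREATE OR REPLACE MODEL analytics.daily_revenue_anomalies
OPTIONS (model_type = 'ARIMA_PLUS',
         time_series_timestamp_col = 'day',
         time_series_data_col = 'revenue',
         time_series_id_col = 'facility_id') AS
SELECT DATE(event_ts) AS day, facility_id, SUM(amount) AS revenue
FROM analytics.rental_events
GROUP BY day, facility_id
""").result()

anomalies = client.query("""
SELECT *
FROM ML.DETECT_ANOMALIES(MODEL analytics.daily_revenue_anomalies,
                         STRUCT(0.99 AS anomaly_prob_threshold))
WHERE is_anomaly
""").to_dataframe()
print(anomalies.head())
```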
ML/AI Platform Engineering (20%)
- Transform prototype notebooks and models into production-grade, versioned, testable Python packages
- Deploy and manage training and inference workflows on GCP (Cloud Run, GKE, Vertex AI) with CI/CD, version tracking, and rollback capabilities (see the sketch after this list)
- Evaluate new products from GCP and vendors; build internal toolkits, shared libraries, and pipeline templates that accelerate delivery across teams
- You will enable the ML team to ship faster with fewer failure modes
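As a rough sketch of the deployment side, the example below registers a packaged model and deploys it to an endpoint with the google-cloud-aiplatform SDK. The project, bucket, serving container, and resource names are placeholders, not a prescribed setup.

```python
# Hypothetical sketch: registering and deploying a packaged model on Vertex AI.
# Project, bucket, and container URIs are placeholders, not real resources.
from google.cloud import aiplatform

aiplatform.init(project="my-gcp-project", location="us-west1")

# Upload a trained model artifact with a pinned serving container so the
# version in the registry matches what CI/CD built and tested.
model = aiplatform.Model.upload(
    display_name="pricing-model",
    artifact_uri="gs://my-bucket/models/pricing/v42/",
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-3:latest"
    ),
)

# Deploy to an endpoint; keeping the previous version deployed via a traffic
# split is one way to support quick rollback.
endpoint = model.deploy(
    machine_type="n1-standard-4",
    min_replica_count=1,
    max_replica_count=3,
    traffic_percentage=100,
)

prediction = endpoint.predict(instances=[[0.3, 12.0, 1.0]])
print(prediction.predictions)
```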
Applied AI & Real-Time Decisioning (20%)
- Support real-time, event-driven inference and streaming feature delivery for mission-critical decisions such as, but not limited to, real-time recommendation systems, dynamic A/B testing, and agentic AI interfacing (see the sketch after this list)
- Contribute to internal LLM-based assistants, retrieval-augmented decision models, and automation agents as the platform evolves
- Implement model monitoring, drift detection, alerting, and performance-tracking frameworks
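For flavor, the sketch below wires a Pub/Sub subscription to a deployed model endpoint for event-driven scoring and publishes the resulting decisions downstream. Topic, subscription, endpoint, and feature names are hypothetical; a production version would add retries, dead-lettering, and monitoring.

```python
# Hypothetical sketch: event-driven inference on a Pub/Sub subscription.
# Topic, subscription, endpoint, and feature names are placeholders.
import json
from google.cloud import aiplatform, pubsub_v1

aiplatform.init(project="my-gcp-project", location="us-west1")
endpoint = aiplatform.Endpoint("projects/123/locations/us-west1/endpoints/456")

subscriber = pubsub_v1.SubscriberClient()
publisher = pubsub_v1.PublisherClient()
subscription = subscriber.subscription_path("my-gcp-project", "rental-events-sub")
decisions_topic = publisher.topic_path("my-gcp-project", "pricing-decisions")

def handle(message: pubsub_v1.subscriber.message.Message) -> None:
    """Score one event and publish the decision downstream."""
    event = json.loads(message.data)
    features = [[event["occupancy"], event["search_demand"], event["unit_size"]]]
    score = endpoint.predict(instances=features).predictions[0]

    publisher.publish(
        decisions_topic,
        json.dumps({"facility_id": event["facility_id"], "score": score}).encode(),
    )
    message.ack()

# Blocks until cancelled; in production this would run on Cloud Run or GKE
# with health checks, retries, and dead-lettering configured.
future = subscriber.subscribe(subscription, callback=handle)
future.result()
```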
Cross-Functional Collaboration
- Partner with data scientists and engineers to operationalize models, semantic layers, and pipelines into maintainable production systems
- Work with pricing, digital product, analytics, and business teams to stage rollouts, support experiments, and define metric-driven success
- Participate in architecture reviews, mentor engineers, and drive technical trade-offs with clarity