Cloud architect leading teams to implement, deploy, and scale services using cloud-native development practices.
- Experience
Latitude AI Senior Staff Software Engineer - Pittsburgh, PA
February 2023 - Present
- Lead cross-organizational teams for L3 self-driving workloads and services, including data ingest, mapping, labeling, and machine learning infrastructure
- Designed a self-service platform that enables engineers to develop quickly using structured infrastructure on Kubernetes and cloud resources
Argo AI Senior Staff Software Engineer - Pittsburgh, PA
January 2022 - February 2023
- Led organizational teams in applying cloud-native practices at scale across a variety of workloads and services
- Served as a thought leader with both technical and non-technical teams on cloud vision and roadmaps
- Scaled compliance (ISO 27001, CSA STAR, GDPR) and engineering practices with generated code and infrastructure
- Managed technical relationships with cloud providers for contracts, product features, and team adoption of new features
Argo AI Staff Software Engineer - Pittsburgh, PA
January 2020 - December 2021
- Expanded cloud resources to provide all engineers with fully self-service cloud infrastructure using GitOps
- Integrated with the AID acquisition and defined the platform architecture for GDPR compliance with legal and product teams
- Led contract renewals from a technical perspective, collaborating with finance and partnerships on cloud usage and growth
Argo AI Senior Software Engineer - Pittsburgh, PA
May 2018 - December 2019
- Established the Cloud Platform team charter and core mission, defining ownership and shared responsibilities with partner teams
- Bootstrapped initial Kubernetes use and GitOps adoption by offboard service teams
- Defined the foundation for a multi-cloud presence on AWS and GCP using enterprise organizational practices
Tesla Staff Software Engineer - Fremont, CA
November 2017 - March 2018
- Facilitated the adoption of common CI/CD engineering practices across Tesla departments
- Automated Kubernetes deployments with shared monitoring, logging, alerting, and access controls
- Leveraged and built on open source technologies for highly available, scalable, and secure systems
IBM Staff Software Engineer, Box Relay - San Jose, CA
February 2016 - November 2017
- Architected the CD pipeline and led the team to reduce release time from monthly to weekly
- Led the design of microservices using GraphQL, Kafka, Spring Boot, and Cassandra
- Coordinated with executives and managers to define agile practices, tools, and vision for engineering teams
- Led daily scrums with engineers to resolve issues and prioritize work items
IBM Staff Software Engineer, Automation Lead - Dublin, OH
July 2015 - January 2016
- Simplified product portfolio to optimize team workflows using automation and agile practices
- Unified legacy products and acquisitions to streamline end-to-end deployment with worldwide IBM teams
- Designed the product automation architecture, assigned team roles, and defined SLAs and redundancy
- Built mobile test automation for iOS and Android devices using Cucumber, Calabash, and Appium
IBM Software Engineer, Team Lead - Dublin, OH
January 2013 - June 2015
- Designed and developed automated deployments and tests for supported platforms (Linux, Unix, Windows)
- Led worldwide performance teams to regularly release the Atlas, FileNet, and StoredIQ products
- Assisted enterprise customers in resolving performance issues for unique product configurations
- Explored and prototyped emerging best practices and tools for automated deployments and tests before they gained wider adoption
- Education
Rochester Institute of Technology - Rochester, NY
Master of Science in Computer Science
- Concentration in Theoretical Computer Science
- Thesis: Solving Satisfiability with Molecular Algorithms
Penn State Erie, The Behrend College - Erie, PA
Bachelor of Science in Computer Engineering
- Skills
Cloud: Certified professional in AWS and GCP
Languages: Python, Go, C/C++, Java/Kotlin
Databases: PostgreSQL, Redis
Infrastructure: Kubernetes, Terraform, Helm, Vault, Istio
Open technologies: OpenAPI, gRPC, GraphQL, Kafka
- Patents
Methods and systems for secure machine learning are disclosed. The methods include, by a processor: receiving a labeled dataset for use in training the machine learning model, and transmitting a first cluster of training data selected from the labeled dataset to a training device. The first cluster includes less than a threshold amount of data that is determined to prevent the training device from deriving information about the labeled dataset from the first cluster. The methods further include receiving a trained machine learning model from the training device, evaluating the trained machine learning model to determine whether it satisfies an evaluation criterion, and encrypting the trained machine learning model if it satisfies the evaluation criterion. The encrypted machine learning model can then be deployed to make predictions using live data.
A method, system, and computer program product for evaluating the capacity needs of a customer. A selection of an industry workflow is received from the customer, or alternatively, the customer provides a custom workflow. The initial workload estimates and the capacity requirements for implementing the selected industry workflow or the custom workflow are determined based on the answers to a set of questions presented to the customer. A model is then created to represent the data flow of the selected or custom workflow as well as the transaction rates. A system is provisioned to simulate production usage based on the determined capacity requirements, and a workload is simulated based on the model and the initial workload estimates. The usage patterns in the simulated workload and the usage of the system are monitored and used to update the capacity requirements of the system.
For storing data in computer readable storage devices, a policy table is provided that is configured to define respective retention period policies for respective items of the data according to geolocation origins of the respective items of the data. First data is received from a first computer system and is stored by a computer system hosting a first computer readable storage device. A first data entry is generated in an audit table. This includes generating a timestamp indicating when the first data was received and the geolocation of the first computer system from which the first data is received. A retention period is assigned for the first data according to a retention period indicated in the policy table for the geolocation origin in the first data entry.