At Pagos, we’re passionate about empowering businesses to take control of their payments stack and solve the puzzles standing between them and optimized growth. Our global platform provides developers, product teams, and payments leaders with both a deeper understanding of their payments data and access to new payments technology through user-friendly tools that are easy to implement. To succeed in this, we need creative thinkers who are willing to roll up their sleeves and start building alongside us.
In this role, you’ll play a key part in building out the core foundation of machine learning at Pagos. We're looking for someone dedicated to continuous learning and staying current with developments in machine learning engineering and best practices. You'll have the opportunity to apply emerging innovations to deliver best-in-class solutions to our clients.
You will partner with Data Science and other teams across the Product org to build products that delight our customers.
As a Machine Learning Engineer, you will:
Build, launch, and maintain models that solve complex business problems
Partner with engineers and product leaders across Pagos
Create ML models for real-time solutions that allow companies to incorporate data-driven decisions into their payments stack to save money and grow revenue, including anomaly detection, recommendation, and prediction use cases
We’re looking for someone with:
5+ years of professional experience as a Machine Learning Engineer, Data Scientist, and/or Data Engineer
Strong experience with data analysis, feature exploration, ETL, and visualization
Highly proficient with SQL
Strong programming skills in Python, including familiarity with the Python ML ecosystem (NumPy, pandas, statsmodels, scikit-learn, TensorFlow/PyTorch, etc.)
Experience with model versioning and lifecycle management (e.g., MLflow), CI/CD environments (e.g., Jenkins or Buildkite), version control (Git), job orchestration (e.g., Airflow), and, as a nice to have, artifact management (e.g., PyPI)
Experience deploying machine learning models as scalable services
Familiarity with parallel-processing databases (e.g., Snowflake, Redshift) and large-scale data processing and distributed systems (e.g., Spark, Dask)
Proven experience shipping high-quality, well-documented code
Nice to haves:
Experience working at high-growth, venture-backed startups
Experience with data visualization tools such as D3.js, ggplot2, or Matplotlib
Prior leadership experience