Working hours
Full-time
Location
Anywhere
Workplace
Remote
Contract type
Permanent contract
What is Cobre and what do we do?
Cobre is Latin America’s leading instant B2B payments platform. We solve the region’s most complex money movement challenges by building advanced financial infrastructure that enables companies to move money faster, safer, and more efficiently.
We enable instant business payments—local or international, direct or via API—all from a single platform.
Built for fintechs, PSPs, banks, and finance teams that demand speed, control, and efficiency. From real-time payments to automated treasury, we turn complex financial processes into simple experiences.
Cobre is the first platform in Colombia to enable companies to pay both banked and unbanked beneficiaries within the same payment cycle and through a single interface.
We are building the enterprise payments infrastructure of Latin America!
What we are looking for:
We are seeking a Senior Data Engineer to join our elite data team at Cobre. This pivotal role sits at the intersection of data technology and financial innovation. You will architect and optimize our data infrastructure, enabling real-time analytics and insights that meet the diverse needs of our clients and business.
This role combines the technical expertise of a cloud architect and DevOps specialist to define, optimize, and maintain enterprise data solutions based on the AWS Well-Architected Framework. While our primary platforms are AWS, Snowflake, and Confluent Cloud, you will stay updated on key technologies and lead migrations or implementations as necessary.
As a key member of Cobre's data team, you will join an elite group of tech visionaries. You will collaborate closely with leaders across departments to align strategies and shape a unified, ambitious roadmap for Cobre's data capabilities. This role is crucial in weaving different technological and business threads into a coherent, powerful vision for the company's growth and innovation in data engineering within the fintech industry.
What would you be doing:
Data Platform: Drive the continuous evolution of our platform to ensure that the different teams have the tools and infrastructure they need to build their own solutions and scale them efficiently.
Data Pipelines: Implement, maintain, and optimize the infrastructure required to support our platform's real-time, event-driven, and batch ETL/ELT processes. Ensure the seamless operation of all processes and develop a comprehensive monitoring solution to proactively address potential issues.
Data Warehouse: Maintain, monitor, and enhance our data model across the different stages of our medallion architecture, while also implementing and reinforcing data quality processes. Monitor the cost and usage of our data warehouse to identify suboptimal processes and queries for improvement. Advocate for and promote best practices across teams.
Data Governance: Assist in defining and implementing essential data governance policies and services on our platform to ensure secure scaling in compliance with the highest standards and regulations of the financial industry.
Machine Learning: Act as a bridge to the data science team, enabling the secure, scalable, and efficient productionization of their solutions.
Raise the bar: Level up platform users by sharing best practices, conducting monthly training sessions, and providing ongoing support for adoption.
Cross-functional Collaboration: Work closely with product, engineering, and data science teams to ensure that the data model supports and enhances product development and customer experience.
Data Architecture Trends and Market Analysis: Stay at the forefront of data pipeline and data modeling patterns, represent Cobre in the wider tech community, and integrate best practices into our data strategies.
What do you need:
Experience: Minimum of 5 years in data engineering, focusing on scalable data pipelines and data models. Proven ability to handle, process, and secure large data sets.
Education: Bachelor's degree in Computer Science, Engineering, or a related field.
Data Pipelines and Data Models: Extensive experience in creating, maintaining, and optimizing event-driven, real-time, and batch pipelines, primarily using Python and SQL. Proven ability in designing and implementing scalable data models within data warehouses.
Data Management: Strong knowledge and hands-on experience in data governance processes, including the development and implementation of information access policies, data privacy protocols, information retention strategies, and more.
Cloud Infrastructure: Experience implementing infrastructure as code is preferred, including building reusable modules and applying best practices such as the DRY principle. Ability to maintain and improve the infrastructure while keeping it cost-effective.
Data Architecture Patterns: Proven experience in designing and implementing scalable, resilient, and cost-efficient architectures for event-driven, real-time, and batch processing pipelines.
Relevant Technologies: A wide variety of AWS services, including but not limited to DynamoDB, Elasticsearch, MWAA, Lambda, Glue, MSK, Kinesis, SQS, SNS, EventBridge, CloudWatch, S3, and SageMaker. Snowflake or other data warehouse experience with stored procedures, views, materialized views, external tables, streams, data models, and file formats such as Parquet and Iceberg. Infrastructure-as-code knowledge is nice to have, preferably Terraform and Terragrunt. GitHub and GitHub Actions, Python, and SQL.
Background in High-Volume Data Management: Proven experience in handling, processing, and securing large sets of data, with a keen understanding of the challenges and solutions in data-intensive environments.
Collaborative Spirit: The ability to work seamlessly across different departments, fostering a collaborative environment that encourages innovation and efficiency.
Industry knowledge: Fintech, especially payments; experience in LatAm markets is a plus.
An advanced level of English is a must.