13114 Database Engineer

Our client in Alpharetta, Georgia is seeking an experienced Database Engineer who is passionate about designing, optimizing, and scaling large data platforms. This individual will play a critical role across all aspects of the data ecosystem — ensuring optimal implementation, configuration, maintenance, and performance of mission-critical systems.

This role has a heavy focus on Databricks as the primary platform, along with extensive hands-on development using Python and PySpark to process, transform, and manage large, time-sensitive financial datasets.

You will join a talented engineering team in a fast-paced, agile environment, helping to develop a groundbreaking, multi-tenant, cloud-based payments platform. We’re looking for a motivated, independent engineer with strong communication skills who wants to see their work make a meaningful impact in the industry.

The ideal candidate brings deep expertise in SQL Server within an Azure PaaS environment and can provide technical leadership across product and engineering teams.

Key Responsibilities

  • Serve as a member of a small database engineering team supporting company-wide data initiatives and operational excellence
  • Design, write, and optimize complex SQL code (stored procedures, functions, tables, views, triggers, etc.)
  • Lead development and optimization efforts within Databricks as the primary data platform
  • Develop, manage, troubleshoot, and enhance complex Python and PySpark codebases that process and securely store large-scale financial datasets
  • Develop and execute long-term strategies to support database performance, capacity, reliability, and scalability
  • Research and implement data solutions aligned with evolving product requirements
  • Troubleshoot, identify, and resolve database and infrastructure-related issues
  • Work with additional SQL platforms such as PostgreSQL
  • Build and support IaaS, PaaS, and SaaS-based data platforms
  • Implement and manage data retention, access, and security policies
  • Perform data migration, replication, user administration, backup, and recovery
  • Collaborate using source control tools such as GitHub, Bitbucket, or similar

Qualifications

  • Bachelor’s degree in Computer Engineering, Computer Science, or related field (or equivalent experience)
  • 5+ years of intensive experience with MS SQL Server and T-SQL
  • 3+ years of IT operations experience building enterprise database solutions
  • Strong hands-on experience with Databricks in Azure (required)
  • Extensive Python and PySpark development experience, including building, managing, troubleshooting, and debugging complex systems that parse, process, and securely store very large, time-sensitive financial datasets
  • Strong verbal and written communication skills
  • Self-starter with proactive work ethic
  • Exceptional organizational and time management skills

Preferred Qualifications

  • Experience designing and managing complex, multi-platform ETL pipelines
  • In-depth knowledge of Parquet files, Delta Tables, and Medallion Architecture
  • Experience with the Atlassian stack (JIRA, Confluence, Bitbucket)
  • Experience with CI/CD and deployment automation tools such as Azure DevOps or TeamCity
  • Experience with Microsoft Azure cloud services
  • Experience with Microsoft Fabric, Power BI, or similar analytics tools
  • Experience working in PCI- and SOC 2-compliant environments
  • Experience with AI-assisted coding tools (e.g., Claude Code or similar)
  • Prior financial services or payments industry experience

Job Category: IT
Job Location: Alpharetta, GA
