About us
ALGOTEQUE is an IT consultancy firm that helps startups, mid-sized companies, and large corporations create and deliver innovative technologies.
Our team has a successful track record in designing, developing, implementing, and integrating software solutions (AI, ML, BI, Web, Automation) for the telecom, energy, banking, insurance, pharmaceutical, automotive, industrial, and e-commerce sectors. We deliver our services in both fixed-price and time-and-materials models, helping our customers achieve their business and IT strategies.
Job Description
Are you a skilled Azure Data Engineer looking to join an innovative and dynamic team? We're seeking a passionate professional to become part of our Data team, focused on building and maintaining a scalable data platform that supports our company's data-driven decision-making processes. You will work with the latest Azure tools and have access to cutting-edge technology like ChatGPT and GitHub Copilot to help you succeed.
Your Main Tasks as a Data Engineer:
Analyze Business Requirements: Design and implement end-to-end data solutions while ensuring high standards of data quality and integrity.
Develop Data Pipelines: Build efficient ETL processes for various data sources using tools like Databricks, Data Factory, Functions, Logic Apps, and Stream Analytics.
Enhance DevOps Stack: Work on improving our DevOps toolchain, including Git, SonarQube, Azure DevOps, and Artifactory.
Optimize Existing Solutions: Create and implement reusable patterns to enhance platform performance.
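In this role those pipelines would run on Azure services such as Databricks or Data Factory; purely as an illustration of the extract-transform-load pattern behind them, here is a minimal sketch in plain Python using the standard-library sqlite3 module as a stand-in sink (the records, table name, and fields are hypothetical):

```python
import sqlite3

def extract():
    # Stand-in for reading from a real source (API, files, event stream).
    return [
        {"id": 1, "amount": "120.50", "currency": "eur"},
        {"id": 2, "amount": "99.00", "currency": "usd"},
    ]

def transform(rows):
    # Data-quality step: typed amounts, normalized currency codes.
    return [(r["id"], float(r["amount"]), r["currency"].upper()) for r in rows]

def load(rows, conn):
    # Stand-in for writing to the platform's storage layer.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS payments (id INTEGER, amount REAL, currency TEXT)"
    )
    conn.executemany("INSERT INTO payments VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
print(conn.execute("SELECT COUNT(*), SUM(amount) FROM payments").fetchone())
```

The same three-stage shape carries over to Spark or Data Factory; only the extract sources and load targets change.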
Required qualifications
Experience with Python: You have experience working on Python projects, preferably in a financial or regulatory context, and can write clean, efficient, and well-documented code.
Strong SQL Knowledge: Proficiency in T-SQL and Spark SQL.
Version Control: Comfortable using Git for version control and collaboration.
Bonus Skills:
Knowledge of the Spark Engine: Experience with Databricks, or with the Scala or PowerShell languages.
Azure Cloud Expertise: Hands-on experience working within the Azure cloud environment.
Experience with Data Pipelines: Familiarity with ETL tools and stream processing will be highly advantageous.
Projects You Can Help Us With:
Build an operational data mart based on event-driven architecture.
Create serverless applications for data loads from APIs.
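A serverless load of this kind would typically be an Azure Function on an HTTP or timer trigger; the sketch below shows only the core logic in plain Python, with the fetch step injectable so it runs without a live API (the endpoint, payload, and field names are hypothetical):

```python
import json

def sample_fetch(url):
    # Stand-in for the real HTTP call an Azure Function would make.
    sample = [{"orderId": "A-1", "total": 10}, {"orderId": "A-2", "total": 25}]
    return json.dumps(sample)

def load_orders(url, fetch=sample_fetch):
    # Parse the API payload and reshape records for the data platform.
    records = json.loads(fetch(url))
    return [{"id": r["orderId"], "total": float(r["total"])} for r in records]

orders = load_orders("https://example.com/api/orders")
print(orders)
```

Injecting the fetch function keeps the parsing and reshaping logic unit-testable independently of the trigger and the external API.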
🌍 Location: Hybrid role based in Prague, fluent Czech required.
Ready to take your data engineering skills to the next level? Apply today and become part of our forward-thinking team!