About us
ALGOTEQUE is an IT consultancy firm that helps startups, mid-sized companies, and large corporations create and deliver innovative technologies.
Our team has a successful track record in designing, developing, implementing, and integrating software solutions (AI, ML, BI, Web, Automation) for the Telecom, Energy, Banking, Insurance, Pharma, Automotive, Industrial, and e-commerce sectors. We deliver our services in both fixed-price and time-and-materials models, helping our customers achieve their business and IT strategies.
Job Description
We are seeking a skilled Data Engineer with at least 3 years of experience to join our team and play a key role in building and optimizing data pipelines to ensure seamless data flow across our organization. In this role, you'll use your data engineering expertise to manage data ingestion processes, maintain robust data repositories, and work within our core data stack, including Databricks, Redshift, DIFW, Informatica, and DBT.
Key Responsibilities:
Design, implement, and maintain scalable data ingestion and data engineering pipelines to streamline data collection, processing, and storage.
Develop and manage DIFW and DBT pipelines for efficient data processing and transformation.
Optimize and oversee data warehousing solutions to support high-performance data analytics and reporting.
Collaborate with cross-functional teams to ensure data integrity, quality, and consistency across the organization.
Maintain Databricks and Redshift repositories along with other core tools such as DIFW, Informatica, and DBT to facilitate data accessibility and usability.
Identify, design, and implement internal process improvements, including automating manual processes and optimizing data delivery.
Troubleshoot, debug, and enhance data pipelines and systems to support data initiatives.
Required Qualifications:
At least 3 years of experience in data engineering, with a strong understanding of data ingestion and data pipeline architectures.
Proficiency in data warehousing technologies and big data solutions, especially Databricks and Amazon Redshift.
Hands-on experience with DIFW pipelines, DBT (data build tool), and Informatica.
Strong SQL skills and experience with data transformation tools.
Excellent teamwork and communication skills, with the ability to convey technical concepts to non-technical stakeholders.
A proactive approach to problem-solving with a commitment to continuously improving data infrastructure and processes.
Nice to Have:
Experience with cloud platforms such as AWS or Azure.
Familiarity with ETL/ELT concepts and experience developing data models for analytics.
If you are passionate about building high-quality data solutions and enjoy working with cutting-edge technologies, we'd love to hear from you!
Apply now to join our team and be a part of our data-driven journey!