Tekvaly is looking for a Data Engineer in the USA for one of its clients. If you are passionate about using data to move businesses forward, don't wait, apply now!
Who We Are
Tekvaly is a diversified global staffing company that provides both offshore and onshore technical solutions to business enterprises. Our mission is to enable superior returns on clients' technology investments through best-in-class industry solutions, domain expertise, and global scale. We feel deeply connected to our customers, so our success isn't just a matter of our bottom line; it is a reflection of how our customers flourish and how their communities thrive. We strive to understand each customer's individual needs so that we can develop products and services that enhance their livelihoods. Our customers are our partners, and when we rise, we rise together.
As a Data Engineer, you will power the future of data by building scalable pipelines, optimizing cloud warehouses, and transforming raw data into actionable insights.
Responsibilities:
Design, build, and maintain scalable ETL pipelines and data workflows using Apache Airflow, Spark, or similar orchestration tools.
Manage data warehouses and data lakes (Snowflake, BigQuery, Redshift) to support high-performance analytics and reporting.
Perform data extraction, transformation, and loading from diverse sources including databases, APIs, and streaming systems such as Kafka.
Develop and maintain both batch and real-time data pipelines to support analytical and operational use cases.
Optimize SQL queries and data models for performance at scale by implementing partitioning, indexing, and caching strategies.
Implement data quality checks, automated data validation, and pipeline monitoring to ensure reliability, accuracy, and SLA compliance.
Maintain logging and alerting mechanisms to proactively detect pipeline failures.
Deploy and manage data infrastructure using Docker, Kubernetes, and cloud platforms (AWS, GCP, Azure), and manage infrastructure through Infrastructure-as-Code tools such as Terraform.
Support dashboard development by providing optimized datasets for Power BI, Tableau, or Looker.
Collaborate with data analysts, data scientists, and business stakeholders to deliver clean, reliable, and well-structured datasets for analytics and reporting.
Requirements:
Hands-on experience in data engineering with strong SQL proficiency.
Hands-on expertise with Python (Pandas, PySpark) and ETL/orchestration tools such as Airflow.
Experience with cloud data platforms such as Snowflake, BigQuery, or Databricks.
Familiarity with containerization (Docker) and CI/CD practices for data pipelines.
Knowledge of streaming technologies such as Kafka or Kinesis for real-time data processing.
Understanding of data modeling concepts including star schema and dimensional modeling.
Experience implementing data quality checks and testing frameworks for reliable pipelines.
Relevant cloud or data engineering certifications (AWS, GCP, Azure, Databricks, or Snowflake) are a plus.
Bachelor’s degree in Computer Science, Data Analytics, or related field, or equivalent work experience.
Soft Skills We Like to See:
Excellent communication skills.
Adaptability and willingness to learn.
Problem-solving mindset.
Strong analytical skills.
Ability to work in a team environment and collaborate effectively with others.