ETL Developer - Big Data (Kafka, Spark, Python)
Experience: 3+ years.
At least 2 years' knowledge of at least one ETL tool among Kafka, Spark, and Python.
Exposure to big data.
Determines data storage requirements.
Builds a data warehouse for the organization's internal departments, applying various data warehousing concepts.
Creates and improves data solutions that enable smooth data delivery, and is responsible for gathering, processing, maintaining, and analyzing large volumes of data.
Leads the logical data model design and implementation and the construction and implementation of operational data stores and data marts.
Designs, automates, and supports sophisticated data extraction, transformation, and loading applications.
Ensures the accuracy of data.
For ETL applications, creates logical and physical data flow models.
Translates data access, transformation, and mobility needs into functional requirements and mapping designs.
Requires SQL competence (query performance tuning, index management, etc.) and a solid grasp of database structure.
Understanding of data modeling concepts.
Knowledge of different SQL/NoSQL data storage techniques and Big Data technologies.
Passionate about sophisticated data structures and problem solving.
Ability to quickly learn new data tools and concepts.
Please send your CVs to careers@pinpoint-hr.com