Job summary
As a Knowledge Graph Engineer, you will:
- Develop pipelines and code to support the ingress and egress of data to and from the knowledge graphs.
- Perform basic and advanced graph querying and data modeling on the knowledge graphs that lie at the heart of the organization's Product Creation ecosystem (a minimal querying sketch follows this summary).
- Maintain the ETL pipelines, code, and knowledge graphs so that they remain scalable, resilient, and performant in line with customer requirements.
- Work in an international and Agile DevOps environment.
This position offers the opportunity to work in a globally distributed, multicultural team, with scope for personal development and for building expertise in technologies widely used in the industry.
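To make the querying bullet above concrete, here is a minimal sketch in Python using the open-source rdflib library; the ex: namespace, classes, and triples are hypothetical illustrations, not the organization's actual product model.

```python
# Minimal graph-querying sketch using rdflib (open-source Python RDF
# library). The ex: namespace, classes, and triples are hypothetical.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/product/")  # hypothetical namespace

g = Graph()
g.add((EX.widget1, RDF.type, EX.Component))
g.add((EX.widget1, EX.partOf, EX.assemblyA))

# Basic query: find every component and the assembly it belongs to.
results = g.query("""
    PREFIX ex: <http://example.org/product/>
    SELECT ?component ?assembly
    WHERE { ?component a ex:Component ; ex:partOf ?assembly . }
""")
for component, assembly in results:
    print(f"{component} is part of {assembly}")
```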
Primary responsibilities:
- Translate the requirements of business functions into “graph thinking”.
- Build and maintain graphs and related applications from data and information, using the latest graph technologies to enable high-value use cases.
- Support and manage graph databases.
- Integrate graph data from internal and external sources.
- Extract data from various sources, including databases, APIs, and flat files.
- Load data into target systems, such as data warehouses and data lakes.
- Develop ETL code to move data from the enterprise platform applications into the enterprise knowledge graphs (a minimal sketch follows this list).
- Optimize ETL processes for performance and scalability.
- Collaborate with data engineers, data scientists, and other stakeholders to model the graph environment so that it best represents the data coming from multiple enterprise systems.
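As a rough illustration of the extract-transform-load responsibilities above, here is a minimal Python sketch, again based on rdflib; the source data, column names, and namespace are hypothetical, and a production pipeline would target a managed graph store rather than an in-memory graph.

```python
# Minimal ETL sketch: extract rows from a flat file, transform them into
# RDF triples, and load them into a graph. The source data, column names,
# and ex: namespace are hypothetical.
import csv
import io

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/product/")  # hypothetical namespace

# Extract: stand-in for a database, API, or exported flat file.
source = io.StringIO("part_id,name\nP-100,Sensor\nP-200,Controller\n")

g = Graph()
for row in csv.DictReader(source):
    part = EX[row["part_id"]]
    # Transform: map each flat record onto the graph model.
    g.add((part, RDF.type, EX.Part))
    g.add((part, EX.name, Literal(row["name"])))

# Load: serialize for inspection (or push to the target store).
print(g.serialize(format="turtle"))
```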
Skills / Experience:
Must have:
- Experience in designing and implementing graph data models that capture complex relationships, ensuring efficient querying and traversal.
- Strong proficiency and hands-on experience in programming (e.g. Python, Java).
- Practical work experience in developing ontologies and graph data models (e.g. LPG, RDF), ideally in combination with complex data schemas and data modeling.
- A sound understanding of, and experience with, graph databases, graph algorithms, and large-scale data integration in complex business networks.
- Experience with graph database query languages (e.g. SPARQL), graph algorithms, graph-to-SQL mapping, and graph-based machine learning (see the traversal sketch after this list).
- Experience with data integration, ETL processes, and ETL tools and frameworks (e.g., Apache Spark, Apache Airflow), as well as with linked-data applications built on tools such as Databricks and Dydra.
- Proficiency in graph databases (e.g., Dydra, Amazon Neptune, Neo4j).
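As a small illustration of the modeling and traversal skills listed above: a hypothetical part-of hierarchy in Turtle, queried with a SPARQL property path that walks the hierarchy to arbitrary depth.

```python
# Hypothetical part-of hierarchy in Turtle, queried with a SPARQL
# property path (ex:partOf+) that follows one or more partOf links,
# returning every direct and indirect parent in a single traversal.
from rdflib import Graph

TTL = """
@prefix ex: <http://example.org/product/> .
ex:bolt    ex:partOf ex:bracket .
ex:bracket ex:partOf ex:chassis .
ex:chassis ex:partOf ex:device .
"""

g = Graph()
g.parse(data=TTL, format="turtle")

query = """
PREFIX ex: <http://example.org/product/>
SELECT ?ancestor WHERE { ex:bolt ex:partOf+ ?ancestor . }
"""
for (ancestor,) in g.query(query):
    print(ancestor)  # bracket, chassis, and device (in some order)
```

Property paths of this kind are one reason graph query languages express variable-depth traversals more naturally than equivalent recursive SQL.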
Nice to have:
- Experience with AWS infrastructure (S3, CloudFormation templates, EC2), security, and data services.
- Practical experience with AI pipeline technologies (RDS/PostgreSQL, Snowflake, Airflow); a minimal orchestration sketch follows this list.
- Familiarity with data warehousing and data modeling best practices.
- Understanding of data visualization tools (e.g., Tableau, Power BI).
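Since Apache Airflow appears in both skill lists, the following is a minimal orchestration sketch (Airflow 2.x style); the DAG id, schedule, and task bodies are hypothetical placeholders for real extract and load logic.

```python
# Minimal Airflow DAG sketch: a daily extract -> load pipeline. The
# dag_id, schedule, and task bodies are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    """Stand-in for pulling records from a source system."""
    pass

def load():
    """Stand-in for writing transformed triples to the graph store."""
    pass

with DAG(
    dag_id="graph_etl",            # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
):
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task      # load runs only after extract succeeds
```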
Education & Personal skillsets:
- A master’s or bachelor’s degree in computer science, mathematics, electronics engineering, or a related discipline, with at least 2 years of experience in a similar role.
- Excellent problem-solving and analytical skills
- A growth mindset with a curiosity to learn and improve.
- Team player with strong interpersonal, written, and verbal communication skills.
- Business consulting and technical consulting skills.
- An entrepreneurial spirit and the ability to foster a positive and energized culture.
- Fluent communication skills in English (spoken and written).
- Experience working in Agile (Scrum knowledge appreciated) with a DevOps mindset.