Senior Software Engineer, Data Backend (CrossX)


About the job

Summary by Outscal

Must have:
  • Python, RESTful APIs
  • Data Warehouses (Trino/Presto, Pinot)
  • Data Pipelines (Airflow, Spark)
  • Kubernetes
  • AWS/GCP
  • Big Data Platforms
Good to have:
  • Open Source Contributions
  • Scala/Java
  • Hadoop, Hive, Flink

About Appier 

Appier is a software-as-a-service (SaaS) company that uses artificial intelligence (AI) to power business decision-making. Founded in 2012 with a vision of democratizing AI, Appier's mission is to turn AI into ROI by making software intelligent. Appier now has 17 offices across APAC, Europe, and the U.S., and is listed on the Tokyo Stock Exchange (ticker: 4180). Visit www.appier.com for more information.

 

About the role

Appier's solutions are powered by proprietary deep learning and machine learning technologies that enable every business to use AI to turn data into insights and decisions. As a Senior Software Engineer, Data Backend, you will help build critical components of this platform.

 

Responsibilities

  • Design, develop, and maintain RESTful APIs using Python.
  • Build and manage robust data warehouses utilizing Trino/Presto and Pinot.
  • Design and develop data pipelines using Apache Airflow and Apache Spark (a minimal sketch follows this list).
  • Work closely with cross-functional teams to develop automation tools that streamline daily operations.
  • Implement state-of-the-art monitoring and alerting systems to ensure optimal system performance and stability.
  • Address queries from applications promptly and effectively, ensuring high client satisfaction.
  • Work on cloud platforms such as AWS and GCP, leveraging their capabilities to optimize data operations.
  • Utilize Kubernetes (k8s) for container orchestration to facilitate efficient deployment and scaling of applications.
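
As a rough illustration of the pipeline work described above, here is a minimal sketch of an Airflow DAG that submits a daily PySpark batch job. The DAG id, application path, and connection name are hypothetical placeholders, not Appier's actual setup; it assumes Airflow 2.4+ with the apache-airflow-providers-apache-spark package installed.

    # Minimal sketch: a daily Airflow DAG that submits a Spark batch job.
    # All names (dag_id, paths, "spark_default") are illustrative assumptions.
    from datetime import datetime

    from airflow import DAG
    from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

    with DAG(
        dag_id="daily_events_aggregation",  # hypothetical pipeline name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",                  # "schedule_interval" on Airflow < 2.4
        catchup=False,
    ) as dag:
        # Submit a PySpark application; the path and arguments are placeholders.
        aggregate = SparkSubmitOperator(
            task_id="aggregate_events",
            application="/opt/jobs/aggregate_events.py",
            conn_id="spark_default",                  # pre-configured Spark connection
            application_args=["--date", "{{ ds }}"],  # Airflow-templated run date
        )

In production, a DAG like this would typically run on Kubernetes (for example via Airflow's KubernetesExecutor), tying the pipeline and container-orchestration responsibilities together.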

 

About you

[Minimum qualifications]

  • BS/MS degree in Computer Science
  • 3+ years of experience in building and operating large-scale distributed systems or applications
  • Experience with Kubernetes development and Linux/Unix
  • Experience managing a data lake or data warehouse
  • Expertise in developing data structures and algorithms on top of Big Data platforms
  • Ability to operate effectively and independently in a dynamic, fluid environment
  • Ability to work in a fast-moving team environment and juggle many tasks and projects
  • Eagerness to change the world in a huge way by being a self-motivated learner and builder

[Preferred qualifications]

  • Contributing to open source projects is a huge plus (please include your GitHub link)
  • Experience working with Python and Scala/Java is a plus
  • Experience with Hadoop, Hive, Flink, Presto/Trino, and related big data systems is a plus
  • Experience with public clouds such as AWS or GCP is a plus