About Appier
Appier is a software-as-a-service (SaaS) company that uses artificial intelligence (AI) to power business decision-making. Founded in 2012 with a vision of democratizing AI, Appier's mission is to turn AI into ROI by making software intelligent. Appier now has 17 offices across APAC, Europe and the U.S., and is listed on the Tokyo Stock Exchange (ticker: 4180). Visit www.appier.com for more information.
About the role
Appier's mission is to make AI easier for everyone to use. To achieve this, we build AI systems that connect heterogeneous data in the cross-screen era, process information efficiently, and solve new and exciting real-world problems. Are you passionate about AI, and looking to be part of this flourishing AI age? Do you want to see how different types of data play a key role in contemporary AI systems, and help improve the capability of these systems? Do you want to build incredible AI systems and watch them grow to meet our product needs? If yes, we'd love to talk to you.
Responsibilities
- Build a flexible ML platform to speed up the development process of AI models.
- Develop next-generation AI backend systems for large-scale data processing, real-time feature generation, and evaluation/performance dashboards.
- Establish and maintain MLOps processes and tools, including model deployment, monitoring, and automation.
About you
[Minimum qualifications]
- BS/BA degree in Computer Science or related field.
- Knowledge of basic software testing (able to write unit tests for algorithms).
- Experience in Unix/Linux environments.
- Strong problem-solving skills and passion for learning new technologies.
- Strong communication skills to work side by side with scientists and collaborate with engineers, product managers and other teams.
[Preferred qualifications]
- Familiarity with the machine learning workflow for building a data-driven AI system.
- Experience using and extending ML platforms such as Kubeflow, MLflow, or Apache Submarine.
- Familiarity with cloud-native ecosystems such as Kubernetes, Helm, Prometheus and Argo.
- Experience with distributed computing engines (e.g., Spark) or stream processing frameworks (e.g., Flink).
- Experience with public clouds such as GCP.
- Experience in at least one of the following programming languages: Python or Java.