AI Engineer

Country: Malaysia & Indonesia

Job description

Summary

We are seeking a talented and motivated AI Engineer based in Malaysia or Indonesia to join our dynamic team. The ideal candidate has hands-on experience building LLM-powered applications, implementing agentic AI workflows, and integrating AI into production environments. You will work on projects involving LangChain, LangGraph, Langflow, Langfuse, vector databases, embeddings, prompt engineering, and multi-agent systems. A strong understanding of both AI model orchestration and application-level deployment is essential. If you are passionate about AI, automation, and delivering production-ready intelligent systems, we would love to meet you. Please reach out to our Talent Acquisition team at lutfi.yulia@develab.io.

Key Responsibilities
  • Design, implement, and maintain AI-powered applications using LLMs (e.g., OpenAI GPT, Anthropic Claude, Google Gemini, Mistral, Meta LLaMA).
  • Develop agentic AI workflows with LangChain and LangGraph, enabling multi-step reasoning, memory, and dynamic tool usage.
  • Create interactive AI applications using Langflow for visual orchestration and Langfuse for observability, debugging, and analytics.
  • Integrate vector databases (Pinecone, Weaviate, Milvus, ChromaDB, etc.) for embedding storage, semantic search, and retrieval-augmented generation (RAG).
  • Build robust prompt engineering frameworks and prompt optimization strategies to ensure accuracy, reliability, and consistency in AI responses.
  • Implement and optimize retrieval pipelines combining embeddings, search algorithms, and metadata filtering.
  • Develop multi-agent AI systems with specialized roles, inter-agent communication, and autonomous task planning.
  • Utilize Docker and Kubernetes for containerization, orchestration, and scaling AI workloads across environments.
  • Collaborate with backend teams to integrate AI services via RESTful APIs or gRPC.
  • Monitor, log, and fine-tune AI models in production using observability tools like Langfuse, Weights & Biases, or OpenTelemetry.
  • Apply AI safety best practices, guardrails, and policy-based filtering to ensure responsible deployment.
  • Conduct performance tuning of AI pipelines for latency, throughput, and cost optimization.
  • Stay up to date with emerging AI frameworks, research papers, and model releases.

Qualifications
  • Proven experience in building LLM applications with frameworks like LangChain, LangGraph, and Langflow.
  • Strong knowledge of agentic AI concepts, including planning, reasoning, tool usage, and long-term memory.
  • Hands-on experience with Langfuse or similar AI observability platforms.
  • Proficiency in Python (FastAPI, Flask, or Django) for backend integration of AI services.
  • Experience with vector databases (Pinecone, Weaviate, Milvus, ChromaDB) and embedding models.
  • Understanding of RAG pipelines, semantic search, and knowledge base construction.
  • Experience with Docker and Kubernetes for deploying scalable AI applications.
  • Familiarity with CI/CD pipelines (GitLab, GitHub Actions) for AI deployment.
  • Knowledge of prompt engineering techniques and model fine-tuning workflows.
  • Strong debugging skills with tools like Langfuse, Postman, and Python debuggers.
  • Ability to work in a remote, fast-paced environment, both independently and collaboratively.

Must-have technical skills
  • LLM frameworks: LangChain, LangGraph, Langflow
  • Agentic AI concepts: planning, reasoning, tool usage, memory
  • Observability: Langfuse or similar
  • Backend: Python (FastAPI, Flask, Django)
  • Vector databases: Pinecone, Weaviate, Milvus, ChromaDB
  • RAG pipelines, semantic search
  • Containerization & orchestration: Docker, Kubernetes
  • CI/CD pipelines: GitLab, GitHub Actions
  • Prompt engineering & model fine-tuning
  • Debugging tools: Langfuse, Postman, Python debuggers

Good-to-have technical skills
  • Model fine-tuning (LoRA, PEFT, QLoRA)
  • Multi-modal AI applications (text, image, audio)
  • Agentic orchestration frameworks: CrewAI, AutoGPT, BabyAGI, OpenAI Assistants API
  • AI cost optimization strategies (token budgeting, hybrid model routing)
  • Cloud AI services: AWS Bedrock, Azure AI, Google Vertex AI
  • Open-source AI project contributions

Must-have soft skills
  • Strong communication skills to articulate complex AI concepts to technical and non-technical stakeholders
  • Ability to work independently and collaboratively in a remote, fast-paced environment
