Who We Are
We're building the financial infrastructure that powers global innovation. With our cutting-edge suite of embedded payments, cards, and lending solutions, we enable millions of businesses and consumers to transact seamlessly and securely.
The Role
We are looking for a backend engineer who can design, build, and operate highly reliable Node.js services on AWS that enable generative AI capabilities across our products and internal workflows.
You will create scalable APIs, data pipelines, and serverless architectures that integrate large language model (LLM) services such as Amazon Bedrock and OpenAI, as well as open-source models, enabling teams to leverage generative AI safely and efficiently.
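To give you a concrete feel for the work, here is a minimal sketch of the kind of service this role owns: a Lambda handler that forwards a prompt to Amazon Bedrock. The model ID, request shape, and handler wiring are illustrative assumptions and will vary by team and model.

```typescript
// Minimal sketch of a Lambda handler that proxies a chat prompt to Amazon Bedrock.
// The model ID and request body below assume a Claude model (Anthropic messages
// format on Bedrock); adjust both for whichever model your team actually runs.
import { BedrockRuntimeClient, InvokeModelCommand } from "@aws-sdk/client-bedrock-runtime";
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

const bedrock = new BedrockRuntimeClient({});

export const handler = async (event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {
  const { prompt } = JSON.parse(event.body ?? "{}");

  const response = await bedrock.send(
    new InvokeModelCommand({
      modelId: "anthropic.claude-3-haiku-20240307-v1:0", // illustrative model ID
      contentType: "application/json",
      accept: "application/json",
      body: JSON.stringify({
        anthropic_version: "bedrock-2023-05-31",
        max_tokens: 512,
        messages: [{ role: "user", content: prompt }],
      }),
    }),
  );

  // Bedrock returns the model output as bytes; decode and return it to the caller.
  const completion = JSON.parse(new TextDecoder().decode(response.body));
  return { statusCode: 200, body: JSON.stringify(completion) };
};
```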
Who You Are
You have experience building Retrieval-Augmented Generation (RAG) systems or knowledge-base chatbots.
You're hands-on with vector databases such as Pinecone, Chroma, or pgvector on Postgres/Aurora (an example retrieval query follows this list).
You hold an AWS certification (Developer, Solutions Architect, or Machine Learning Specialty).
You have experience with observability tooling (Datadog, New Relic) and cost-optimization strategies for AI workloads.
You have a background in microservices, domain-driven design, or event-sourcing patterns.
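If you have built RAG retrieval before, this kind of query will look familiar. A minimal pgvector sketch, assuming a hypothetical documents table with an embedding column and an embedQuery helper provided elsewhere:

```typescript
// Minimal RAG retrieval sketch against pgvector on Postgres/Aurora.
// The documents table, its columns, and the embedQuery helper are hypothetical.
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from the standard PG* environment variables

export async function retrieveContext(
  question: string,
  embedQuery: (text: string) => Promise<number[]>,
): Promise<string[]> {
  const embedding = await embedQuery(question);

  // pgvector's <-> operator orders rows by distance to the query embedding;
  // the vector is passed in its text-literal form, e.g. '[0.1,0.2,...]'.
  const { rows } = await pool.query<{ content: string }>(
    "SELECT content FROM documents ORDER BY embedding <-> $1::vector LIMIT 5",
    [`[${embedding.join(",")}]`],
  );

  return rows.map((row) => row.content);
}
```

The retrieved passages would then be stitched into the prompt sent to the model, as in the Bedrock handler above.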
What You’ll Be Doing
Design and implement REST/GraphQL APIs in Node.js/TypeScript to serve generative AI features such as chat, summarization, and content generation.
Build and maintain AWS-native architectures using Lambda, API Gateway, ECS/Fargate, DynamoDB, S3, and Step Functions.
Integrate and orchestrate LLM services (Amazon Bedrock, OpenAI, self-hosted models) and vector databases (Amazon Aurora pgvector, Pinecone, Chroma) to power Retrieval-Augmented Generation (RAG) pipelines.
Create secure, observable, and cost-efficient infrastructure as code (CDK/Terraform) and automate CI/CD with GitHub Actions or AWS CodePipeline (a CDK sketch follows this list).
Implement monitoring, tracing, and logging (CloudWatch, X-Ray, OpenTelemetry) to track latency, cost, and output quality of AI endpoints (see the metrics sketch below).
Collaborate with ML engineers, product managers, and front-end teams in agile sprints; participate in design reviews and knowledge-sharing sessions.
Establish best practices for prompt engineering, model evaluation, and data governance to ensure responsible AI usage.
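For a flavor of the infrastructure-as-code side of the role, here is a minimal CDK sketch that deploys an API Gateway endpoint in front of a Node.js Lambda. The entry-point path and construct names are illustrative, and a real stack would add auth, throttling, and alarms.

```typescript
// Minimal CDK sketch: an API Gateway REST API fronting a bundled Node.js Lambda.
// Entry-point path and construct names are illustrative.
import { App, Stack, Duration } from "aws-cdk-lib";
import { NodejsFunction } from "aws-cdk-lib/aws-lambda-nodejs";
import { Runtime } from "aws-cdk-lib/aws-lambda";
import { LambdaRestApi } from "aws-cdk-lib/aws-apigateway";

const app = new App();
const stack = new Stack(app, "GenAiApiStack");

// Bundles lambda/chat.ts with esbuild and deploys it as a Node.js 20 function.
const chatFn = new NodejsFunction(stack, "ChatHandler", {
  entry: "lambda/chat.ts",
  runtime: Runtime.NODEJS_20_X,
  timeout: Duration.seconds(30),
  memorySize: 512,
});

// Proxies every route on the REST API to the chat handler.
new LambdaRestApi(stack, "ChatApi", { handler: chatFn });

app.synth();
```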
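Because we care as much about the cost and latency of AI endpoints as we do about their output quality, here is a minimal sketch of publishing per-request metrics to CloudWatch. The namespace, metric names, and token-to-cost conversion are illustrative assumptions, not our production conventions.

```typescript
// Minimal sketch: publish latency and estimated cost for an AI endpoint to CloudWatch.
// Namespace, metric names, dimensions, and the pricing constant are illustrative.
import { CloudWatchClient, PutMetricDataCommand } from "@aws-sdk/client-cloudwatch";

const cloudwatch = new CloudWatchClient({});
const COST_PER_1K_TOKENS_USD = 0.003; // placeholder rate; use the real pricing for your model

export async function recordInvocation(endpoint: string, latencyMs: number, totalTokens: number): Promise<void> {
  await cloudwatch.send(
    new PutMetricDataCommand({
      Namespace: "GenAI/Endpoints", // illustrative namespace
      MetricData: [
        {
          MetricName: "Latency",
          Unit: "Milliseconds",
          Value: latencyMs,
          Dimensions: [{ Name: "Endpoint", Value: endpoint }],
        },
        {
          MetricName: "EstimatedCost",
          Unit: "None",
          Value: (totalTokens / 1000) * COST_PER_1K_TOKENS_USD,
          Dimensions: [{ Name: "Endpoint", Value: endpoint }],
        },
      ],
    }),
  );
}
```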