
Agnieszka W.

Vetted Crafter

Staff Software Engineer – MVP & New Features

🇵🇱 Poland · 9 years experience
Available in 2 weeks · Full-time · Contract

I ship MVPs fast and build features that users love; Cursor and AI tools accelerate my work.

About

I build MVPs and product features fast. I have shipped 8 products from zero to first users in 4–8 weeks using Next.js, Supabase, and Vercel. Cursor accelerates my work: I use it to scaffold apps, generate components, and iterate on UI. I care about shipping something usable quickly, then improving based on real feedback.

My stack is Next.js, Supabase, Stripe, and Tailwind. I have built B2B tools, community platforms, and SaaS products. I also integrate AI features where they add value (chatbots, search, summarization) using OpenAI and Claude APIs. The goal is always a product users love, delivered fast.

AI Expertise

MVP · New Features
TypeScript · Next.js · Supabase · Vercel · Stripe · Cursor · Tailwind

Notable Projects

Seldon-Based Multi-Model Serving Platform

Designed and deployed a Seldon Core v2 platform on GKE supporting canary deployments, A/B testing, and ensemble model graphs for a fintech client running 40+ models in production. Built custom inference graphs with pre/post-processing pipelines and drift detection using Alibi Detect.

Seldon Core · Kubernetes · GKE · Python · MLflow · Prometheus · Grafana

Standardized deployment workflow across 40 models; canary rollouts reduced production incidents from 8/quarter to 1/quarter; P99 inference latency improved 35%.
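A canary rollout of the kind described above can be expressed as a Seldon Core v2 Experiment resource. This is a minimal sketch; the model names and traffic weights are hypothetical:

```yaml
apiVersion: mlops.seldon.io/v1alpha1
kind: Experiment
metadata:
  name: scoring-model-canary   # hypothetical experiment name
spec:
  default: scoring-model-v1
  candidates:
    - name: scoring-model-v1
      weight: 90               # 90% of traffic stays on the stable model
    - name: scoring-model-v2
      weight: 10               # 10% canary traffic to the new version
```

Shifting weight gradually toward the candidate, while watching Prometheus metrics, is what allows a bad release to be caught before it becomes a production incident.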

DVC-Based Dataset Versioning System

Built a data versioning and experiment reproducibility platform using DVC, MLflow, and a custom data lineage tracker. Implemented DVC pipelines for automated feature engineering runs triggered by data changes, with full reproducibility from raw data to trained model artifact.

DVC · MLflow · Python · AWS S3 · Git · Airflow

Reduced time to reproduce any historical experiment from 3 days to 45 minutes; compliance team can now audit any model decision back to the exact training data snapshot.
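A DVC pipeline of the kind described above is declared in `dvc.yaml`; DVC re-runs a stage only when one of its declared dependencies changes, and the resulting artifacts are tied to the exact data snapshot. An illustrative sketch with hypothetical stage names and paths:

```yaml
# dvc.yaml — illustrative pipeline; stage names and paths are hypothetical
stages:
  featurize:
    cmd: python featurize.py data/raw data/features
    deps:
      - featurize.py
      - data/raw            # stage re-runs automatically when raw data changes
    outs:
      - data/features
  train:
    cmd: python train.py data/features models/model.pkl
    deps:
      - train.py
      - data/features
    outs:
      - models/model.pkl    # artifact reproducible from the exact data snapshot
```

`dvc repro` walks this graph, and `git checkout` of any historical commit plus `dvc checkout` restores the matching data, which is what makes the audit trail possible.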

LLM Inference Optimization with vLLM

Migrated an internal LLM serving setup from naive Hugging Face generate() calls to a vLLM-based inference server with continuous batching and PagedAttention. Deployed Mistral 7B and Llama 3 8B models on A100 instances with AWQ quantization for cost optimization.

vLLM · Mistral · PyTorch · Kubernetes · NVIDIA Triton · Python

Throughput increased 12x over baseline; monthly inference cost dropped from $28K to $6K for the same workload; P95 latency improved from 4.2s to 680ms.
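The scheduling idea behind that throughput gain can be shown with a toy simulation: static batching holds every slot until the slowest request in the batch finishes, while continuous batching refills a slot the moment a request completes. This is a pure illustration of the scheduling policy, not vLLM's implementation (which also depends on PagedAttention for memory efficiency); the request lengths are made up:

```python
def static_batching_steps(lengths, batch_size):
    """Static batching: each batch runs until its LONGEST request finishes."""
    steps = 0
    for i in range(0, len(lengths), batch_size):
        steps += max(lengths[i:i + batch_size])
    return steps

def continuous_batching_steps(lengths, batch_size):
    """Continuous batching: a finished request's slot is refilled immediately."""
    queue = list(lengths)
    active = []  # remaining decode steps per in-flight request
    steps = 0
    while queue or active:
        while queue and len(active) < batch_size:
            active.append(queue.pop(0))  # admit from the queue as slots free up
        steps += 1                       # one decode step for the whole batch
        active = [r - 1 for r in active if r > 1]
    return steps

# A few long generations mixed with many short ones (hypothetical workload)
lengths = [512, 16, 16, 16, 512, 16, 16, 16]
print(static_batching_steps(lengths, batch_size=4))      # 1024 steps
print(continuous_batching_steps(lengths, batch_size=4))  # 528 steps
```

With short requests no longer stalled behind long ones, the same hardware serves roughly twice the tokens in this toy case; real workloads with more length variance see larger gains, which is consistent with the order-of-magnitude improvement reported above.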

Work Experience

Staff Platform Engineer – ML Infrastructure

Allegro

2020 – Present

Lead the ML serving platform team at Poland's largest e-commerce platform. Own model serving infrastructure, experiment tracking standards, and the LLM infrastructure strategy.

Senior DevOps / MLOps Engineer

ING Tech Poland

2016 – 2020

Built CI/CD infrastructure for banking software and transitioned into ML model deployment. Designed the first Kubernetes-based model serving infrastructure for the bank's credit scoring team.

Education & Certifications

🎓

M.Sc. Computer Science

Warsaw University of Technology · 2015

🏆 Certified Kubernetes Administrator (CKA)
🏆 Certified Kubernetes Application Developer (CKAD)
🏆 Google Professional Cloud DevOps Engineer
