Churn Prediction End-to-End ML
Complete machine learning pipeline for customer churn prediction with automated data processing, model training, and deployment to AWS ECS.
Building production-ready machine learning systems with modern MLOps practices. Specialized in end-to-end ML pipelines, AWS cloud infrastructure, and automated deployment workflows. Turning data into actionable insights and scalable solutions.
Passionate ML Engineer with expertise in building end-to-end machine learning solutions. I specialize in transforming complex data problems into production-ready systems using modern MLOps practices.
My work focuses on designing scalable ML pipelines, implementing automated deployment workflows, and leveraging cloud infrastructure to deliver robust and maintainable solutions.
I believe in writing clean, efficient code and following best practices to ensure models transition smoothly from development to production environments.
I don't start with models — I start with business context.
Before writing a single line of code, I focus on: what decision we are trying to improve, which metric actually drives revenue or cost, what constraints exist (budget, latency, infrastructure limits), and how the model will be consumed.
I break the problem into three layers:
This ensures I build systems that are not just accurate, but deployable, maintainable, and ROI-positive.
I treat ML systems as production software, not experiments. I focus on:
If a model improves accuracy by 5% but increases infra cost by 40%, it's not a win. My goal is to improve performance while maintaining or optimizing cost-efficiency.
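This trade-off can be reduced to a back-of-the-envelope net-value check. A minimal sketch, where every number (prediction volume, value per correct prediction, infra costs) is an invented placeholder rather than a real project figure:

```python
# Sketch: is an accuracy gain worth an infra cost increase?
# Every number here is an illustrative placeholder.

def value_per_month(accuracy: float, predictions: int, value_per_correct: float) -> float:
    """Expected monthly value generated by correct predictions."""
    return accuracy * predictions * value_per_correct

baseline_value = value_per_month(0.80, 100_000, 0.05)   # current model
candidate_value = value_per_month(0.84, 100_000, 0.05)  # +5% relative accuracy
baseline_cost, candidate_cost = 1_000, 1_400            # +40% infra cost

baseline_net = baseline_value - baseline_cost     # 4000 - 1000 = 3000
candidate_net = candidate_value - candidate_cost  # 4200 - 1400 = 2800

print("ship candidate" if candidate_net > baseline_net else "keep baseline")
```

With these placeholder numbers the 5% accuracy gain does not cover the 40% cost increase, so the baseline wins; at a different volume or value per prediction the verdict flips, which is exactly why the calculation has to be rerun per project.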
Cost optimization starts at the architecture level. I focus on:
I design pipelines that scale horizontally only when needed. ML systems should scale with demand — not sit idle consuming budget.
I design ML systems with MLOps principles:
Reliability is not optional in production ML. If the system cannot be monitored, versioned, and rolled back — it is not production-ready.
My skillset spans three critical layers:
This allows me to take ownership from experimentation to scalable deployment. I bridge the gap between data science and production engineering.
Technical solutions must be translated into business language. When communicating with stakeholders, I:
For example: instead of saying "The F1-score improved by 4%," I explain: "This reduces false approvals by 12%, saving approximately X per month." Clear communication builds trust.
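The arithmetic behind that kind of statement is simple enough to show. A minimal sketch, where the application volume, approval rate, and loss per wrong approval are all assumed for illustration:

```python
# Sketch: convert a false-positive reduction into a monthly dollar figure.
# Application volume, approval rate, and loss per error are assumptions.

def monthly_false_approvals(applications: int, approval_rate: float,
                            false_positive_rate: float) -> float:
    """Approved applications that should have been rejected."""
    return applications * approval_rate * false_positive_rate

before = monthly_false_approvals(10_000, 0.60, 0.050)  # old model
after = monthly_false_approvals(10_000, 0.60, 0.044)   # new model: 12% fewer FPs

cost_per_false_approval = 80  # assumed average loss per wrong approval
savings = (before - after) * cost_per_false_approval

print(f"{before - after:.0f} fewer false approvals -> ${savings:,.0f} saved per month")
```

Stakeholders rarely care which metric moved; they care about the last line of this calculation.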
I combine engineering discipline, a production-first mindset, cost-awareness, structured thinking, and clear communication. I don't just build models — I build systems that are scalable, measurable, and maintainable.
I approach every project with the mindset: "How does this create long-term value for the organization?"
I prioritize:
A model that works today but fails silently in three months is a liability. Sustainability is part of the engineering process.
I evaluate risk in three areas: data drift, model bias, and infrastructure failure.
Mitigation strategies include:
Production ML is risk management as much as modeling.
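As one concrete example of guarding against data drift, a Population Stability Index (PSI) check on a single feature can be sketched as follows. The data is synthetic and the 0.1 / 0.25 thresholds are common rules of thumb, not universal constants:

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between baseline and live samples of one feature."""
    # Bin edges come from the baseline (training-time) distribution.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live values
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    a_frac = np.histogram(actual, edges)[0] / len(actual)
    # Floor the fractions to avoid log(0) and division by zero.
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(42)
baseline = rng.normal(0.0, 1.0, 5_000)  # feature at training time
live = rng.normal(0.5, 1.0, 5_000)      # live traffic with a mean shift

print(f"PSI vs itself:  {psi(baseline, baseline):.3f}")  # 0.000
print(f"PSI vs shifted: {psi(baseline, live):.3f}")
```

A rough reading: below 0.1 the feature is stable, 0.1-0.25 suggests moderate shift, above 0.25 the shift is worth investigating. In production a dedicated tool such as Evidently packages this kind of check per feature on a schedule, but the underlying arithmetic is this simple.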
I use AI regularly to improve development speed — especially for boilerplate code, refactoring, testing, and documentation. It helps me work roughly 50–60% faster.
However, AI is an accelerator, not a decision-maker.
Every line of generated code is manually reviewed, validated, and tested before use. System design, architectural decisions, trade-offs, and business impact are always determined by problem context — not by AI output.
End-to-end retail demand forecasting platform with PySpark on AWS EMR for 125M+ rows, XGBoost training, MLflow tracking, FastAPI serving, and Evidently drift monitoring.
Automated ML pipeline with GitHub Actions for continuous integration and deployment. Features automated testing, model versioning, and containerized deployment.
AI-powered system for automatic code analysis and documentation generation. Accelerates developer onboarding, reduces knowledge silos, and generates real-time architectural insights using large language models.
Cutting-edge machine learning system coming soon. Stay tuned for details about this exciting new project featuring production-grade ML infrastructure and advanced algorithms.
Next-generation enterprise AI solution in development. Combining latest advances in AI/ML with enterprise-grade deployment and scalability. More information coming soon.
Comprehensive expertise across ML engineering, cloud infrastructure, and software development
Production-grade ML system deployed on AWS infrastructure.
[Architecture diagram] Data Storage (MySQL Database) → Spark Processing → Model Training → Container Registry → Endpoint Deployment → Monitoring & Logs
From raw data challenges to production-grade AWS deployment.
Mukta Mart · Full-time
Jul 2023 - Nov 2025 · 2 yrs 5 mos
Chattogram, Bangladesh · On-site
Machine learning and MLOps, PySpark / Big Data handling, SQL, CI/CD, AWS cloud services
Mukta Mart · Junior Analyst
Mar 2021 - Jun 2023 · 2 yrs 4 mos
Chattogram, Bangladesh · On-site
PySpark / Big Data, Amazon Web Services (AWS), Tableau