Open to opportunities

Hey, I'm Pavan

Software engineer passionate about building reliable infrastructure and AI systems that make a real difference — from financial platforms handling millions of daily transactions to intelligent search systems that help researchers find what they need.

Pavan Babu Bheesetti
3+ Years Exp
99.9% Uptime
$300K+ Savings
1M+ Daily Records
Python · AWS · Docker · Kubernetes · FastAPI · TensorFlow · LangChain · PostgreSQL · RAG · ChromaDB · Spark · Kafka · Airflow · CI/CD · PyTorch
About
Hey, I'm Pavan

I'm a software engineer who believes the best technology is the kind you don't notice — it just works, reliably, at whatever scale you throw at it. That belief has shaped every system I've built.

My journey started with electrical engineering in India, where I first fell in love with how systems work under the hood. That curiosity brought me to the U.S. to pursue a Master's in Data Science at UT Arlington, where I dove deep into distributed systems, machine learning, and cloud computing.

Between my degrees, I spent close to two years at Transcend Street, a fintech company where I worked on the kind of problems I care most about: building data infrastructure that treasury teams depend on every single day, designing settlement systems that talk to some of the biggest financial institutions in the world, and making sure none of it breaks when the market opens.

Today, I volunteer as an AI Research Engineer at SEAR Lab, building semantic search systems that help researchers navigate thousands of files using RAG and vector embeddings. I'm drawn to the space where cloud infrastructure meets applied AI — where you're not just training a model, but shipping a product that people rely on.

I'm looking for my next role where I can keep building technology that matters — whether that's at a startup moving fast or a larger team solving hard problems at scale.

RAG & Semantic Search
Cloud Infrastructure
Fintech & Settlement
M.S. Data Science — UTA
AWS Certified
Published Researcher
Experience
Where I've shipped code
AI Research Engineer · SEAR Lab · Volunteer
Jan 2026 – Present
  • Created an internal knowledge assistant using Google Apps Script and Google Drive API to consolidate 5,000+ research files into a centralized, searchable system, reducing manual lookup time by 60%.
  • Engineered semantic search with vector embeddings and ChromaDB, enabling relevance-based ranking with metadata filters and improving response accuracy by 25%.
  • Developing automated content processing workflows (Apps Script, Python) to convert diverse file formats into structured JSON with enriched metadata for indexed retrieval.
  • Containerized backend services with Docker and implemented CI/CD pipelines for controlled releases on Linux.
  • Beyond the resume: Designed the system architecture from scratch — chose the embedding model (all-MiniLM-L6-v2), set up the vector store, and built the retrieval pipeline end-to-end as a solo contributor.
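The retrieval step described above — relevance-ranked search with metadata filters — can be sketched in plain Python. This is a minimal illustration, not the SEAR Lab code: the embeddings below are tiny stand-in vectors, and in the real pipeline they come from all-MiniLM-L6-v2 with ChromaDB handling storage, filtering, and nearest-neighbor search.

```python
import math

# Toy corpus: each document carries a stand-in embedding and metadata.
# In the real system, vectors come from all-MiniLM-L6-v2 and live in ChromaDB.
DOCS = [
    {"id": "d1", "vec": [0.9, 0.1, 0.0], "meta": {"type": "paper"}},
    {"id": "d2", "vec": [0.1, 0.9, 0.0], "meta": {"type": "note"}},
    {"id": "d3", "vec": [0.8, 0.2, 0.1], "meta": {"type": "paper"}},
]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(query_vec, where=None, top_k=2):
    """Relevance-ranked retrieval with an optional metadata filter,
    mirroring ChromaDB's query(..., where=...) pattern."""
    hits = [
        d for d in DOCS
        if not where or all(d["meta"].get(k) == v for k, v in where.items())
    ]
    hits.sort(key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["id"] for d in hits[:top_k]]

print(search([1.0, 0.0, 0.0], where={"type": "paper"}))  # ['d1', 'd3']
```

The metadata filter runs before ranking, so an irrelevant match type never displaces a relevant one — the same reason the real system applies `where` filters at query time rather than post-filtering results.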
Data Engineer · Transcend Street
Aug 2022 – Jan 2024
  • Architected AWS-based financial settlement ingestion platform (Python, SQL, EC2, S3) processing 200K–1M+ daily transactions for treasury reconciliation workflows, maintaining 99.9% data availability during market hours.
  • Designed a collateral optimization engine (liquidity waterfall + what-if simulations) using Python/C++ services and SQL models, enabling treasury desks to minimize funding usage and generating $100K–$300K quarterly savings.
  • Built replayable ETL pipelines with Apache Airflow orchestration, implementing checkpointing and deterministic keys to handle late-arriving trades and historical backfills, eliminating duplicate settlements and reducing manual corrections.
  • Implemented transfer pricing calculations (benchmark, security class, haircut, FX) through Python services and REST APIs with validation layers, reducing pricing disputes by 30% and shortening resolution cycles to under 1 hour.
  • Integrated Triparty and CCP settlement flows (BNY Mellon, Euroclear, J.P. Morgan) using secure API and file interfaces with automated reconciliation checks, reducing manual bookings by 40–60% and settlement breaks by 20–35%.
  • Built intraday liquidity analytics pipelines (SOD, intraday, EOD views) using SQL aggregation services and dashboard feeds, enabling faster funding decisions and lowering financing costs by 3–5%.
  • Partnered with treasury, risk, and engineering teams to build data infrastructure supporting large-scale settlement operations, enabling faster reconciliation and more reliable financial reporting.
  • Beyond the resume: Served as the go-to person for production incidents during market hours — debugged data pipeline failures in real time while settlements were actively processing, often under pressure from trading desks.
  • Beyond the resume: Designed the schema and data contracts for cross-system integration with three major clearinghouses, navigating different file formats (FpML, SWIFT, CSV) and reconciliation logic.
  • Beyond the resume: Mentored a junior engineer on Airflow best practices and helped the team adopt infrastructure-as-code patterns for pipeline deployment.
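The replayable-pipeline idea above — deterministic keys plus idempotent writes — can be sketched in a few lines of Python. This is a hedged illustration, not Transcend Street's code: the field names (`trade_id`, `value_date`, `currency`) are made up for the example, and a real pipeline would write to a database with Airflow orchestrating checkpointed replays.

```python
import hashlib
import json

def settlement_key(record: dict) -> str:
    """Derive a deterministic key from the fields that identify a settlement,
    so a replayed batch or a late-arriving duplicate maps to the same key."""
    ident = {k: record[k] for k in ("trade_id", "value_date", "currency")}
    # sort_keys makes the JSON (and hence the hash) order-independent
    return hashlib.sha256(json.dumps(ident, sort_keys=True).encode()).hexdigest()

def ingest(store: dict, batch: list) -> dict:
    """Idempotent upsert: last write wins per key, so replays never
    create duplicate settlements."""
    for rec in batch:
        store[settlement_key(rec)] = rec
    return store

batch = [
    {"trade_id": "T1", "value_date": "2023-06-01", "currency": "USD", "amount": 100},
    {"trade_id": "T2", "value_date": "2023-06-01", "currency": "EUR", "amount": 250},
]
store = ingest({}, batch)
store = ingest(store, batch)  # replay the same batch: still two records
print(len(store))  # 2
```

Because the key is derived only from identifying fields, a late correction to `amount` overwrites the earlier row instead of booking a second settlement — which is what eliminates the manual corrections the bullet above mentions.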
Projects
Things I've built
Skills
Tech stack
Credentials
Certifications & Publications
Education
Academic background
UTA
University of Texas at Arlington
M.S. Mathematics & Computer Science (Data Science)
Coursework: Distributed Systems, Cloud Computing, Machine Learning, Statistics, Data Structures, OOP
Jan 2024 – Dec 2025
3.7 / 4.0
AU
Andhra University
B.E. Electrical Engineering
Jul 2018 – May 2022
3.3 / 4.0
Contact
Let's build something together

I'm looking for SDE, AI/ML, and Data Engineering roles where I can build technology that makes an impact. Whether it's an opportunity, a collaboration, or just a good tech conversation — I'd love to hear from you.