Open to work
🤖 AI Specialist

Neural Networks & Deep Learning

Passionate about neural networks and deep learning. I work with modern frameworks to build and optimize intelligent systems, and I'm always exploring new techniques in AI.

Deep Learning
CNN, RNN, Transformers, LLMs
Neural Networks
TensorFlow, PyTorch, JAX
AI Research
Model Optimization, NLP, CV

By The Numbers

My journey in AI and deep learning

100+
Models Trained
15+
Years Experience
50+
Research Papers
99%
Accuracy Rate
500+
GitHub Stars
30+
Open Source
1000+
Models Deployed

Tech Ecosystem

Interactive graph of my technical expertise (hover over nodes to explore)

Currently Exploring

Latest interests and ongoing research

🤖

Multimodal AI

Exploring vision-language models and their applications in real-world scenarios

CLIP · BLIP · Flamingo

Efficient AI

Researching model compression, quantization, and efficient training techniques

LoRA · Pruning · Distillation
🔬

Reinforcement Learning

Implementing RLHF and exploring applications in autonomous systems

PPO · DQN · RLHF
🌐

Edge AI

Deploying neural networks on edge devices and optimizing for real-time inference

TensorRT · ONNX · CoreML

Specializations

Key areas of expertise in AI and machine learning

Computer Vision
Image recognition, object detection, semantic segmentation
NLP
Transformers, embeddings, language models, text generation
Reinforcement Learning
Policy optimization, agent training, reward systems
Model Optimization
Quantization, pruning, knowledge distillation, inference
Time Series
Forecasting, anomaly detection, sequential data analysis
Generative Models
GANs, diffusion models, VAE, content generation
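
Quantization, one of the optimization techniques listed above, can be illustrated with a minimal sketch: symmetric per-tensor int8 post-training quantization of a weight array. This is plain NumPy for illustration only; the function names are my own, not any library's API.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: w ≈ scale * q."""
    scale = np.abs(w).max() / 127.0  # map the largest magnitude to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.002, 1.0], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# reconstruction error is bounded by half a quantization step (scale / 2)
```

The same idea, applied per-channel and combined with calibration data for activations, is what inference engines such as TensorRT and ONNX Runtime do under the hood.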

Recent Projects

Latest AI research and development work

2024
Advanced LLM Fine-tuning Framework
Developed a production-ready framework for efficient fine-tuning of large language models with LoRA and QLoRA techniques. Achieved a 40% reduction in training time.
PyTorch · LLM · Optimization
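
The LoRA idea behind this framework can be sketched in a few lines: keep the pretrained weight matrix W frozen and train only a low-rank pair (A, B), adding their scaled product to the forward pass. This is a NumPy sketch under my own naming, not the project's actual API.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 16  # rank r << d; alpha scales the update

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, zero-init

def forward(x: np.ndarray) -> np.ndarray:
    # LoRA: y = Wx + (alpha / r) * B(Ax); only A and B receive gradients
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# with B zero-initialized, the adapted model starts identical to the base
assert np.allclose(forward(x), W @ x)
```

Because only A and B are trained (2·r·d parameters instead of d²), optimizer state and gradient memory shrink dramatically, which is where the training-time savings come from.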
2024
Vision Transformer Enhancement
Implemented a multi-scale Vision Transformer architecture achieving 96.5% accuracy on ImageNet. Published the findings in a peer-reviewed journal.
Computer Vision · Transformers · Research
2023
Real-time Anomaly Detection System
Built an unsupervised anomaly detection pipeline using autoencoders and isolation forests. Deployed to production, handling 1M+ events/day.
Time Series · Production Deployment
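
The autoencoder half of this pipeline boils down to reconstruction-error scoring: points that the model cannot reconstruct well are flagged as anomalous. A simplified stand-in, using a linear "autoencoder" fitted via truncated SVD in NumPy (all names and data here are mine, for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# normal data lives near a 2-D subspace of a 5-D space
normal = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 5))

# linear "autoencoder" via truncated SVD: encode/decode with top-k components
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:2]  # keep k = 2 principal directions

def anomaly_score(x: np.ndarray) -> float:
    """Reconstruction error: large when x leaves the learned subspace."""
    z = (x - mean) @ components.T   # encode
    x_hat = z @ components + mean   # decode
    return float(np.linalg.norm(x - x_hat))

inlier = normal[0]
outlier = inlier + 10 * rng.normal(size=5)  # pushed off the subspace
assert anomaly_score(outlier) > anomaly_score(inlier)
```

A nonlinear autoencoder generalizes the encode/decode step, and the isolation-forest side provides a complementary density-free score; combining the two reduces blind spots of either method alone.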
2023
Multimodal Learning Pipeline
Developed a fusion architecture combining text and image embeddings for semantic understanding. Used in a recommendation system serving 100K+ users.
Multimodal · NLP · CV

Case Studies & Impact

Selected real-world projects with measurable business outcomes

E-commerce · Recommendation System
Personalized Ranking with Transformer-based Recommender

Designed and deployed a ranking model based on transformer architecture to personalize product feeds for a large e-commerce platform.

CTR Uplift
+18.7%
Over a 60-day A/B test
Revenue / Session
+11.3%
Same traffic, higher monetization
Stack: PyTorch · Transformers · Feature Store · Online Inference
Computer Vision · Industrial QA
Real-time Visual Defect Detection Pipeline

Built a CNN-based inspection system for production-line quality control with strict latency requirements.

Accuracy
87% → 96.2%
False negatives reduced by 54%
Latency
3.2s → 0.8s
Optimized for edge GPUs
Stack: ONNX · Quantization · TensorRT · Edge Deployment
NLP · Customer Support Automation
LLM-based Triage & Response Assistant

Implemented an LLM-driven assistant to triage and generate draft replies for customer requests in multiple languages.

Agent Handling Time
-32%
Time to first response
Automation
~42%
Tickets fully handled by assistant
Stack: LLM · LoRA/QLoRA · Retrieval-Augmented Generation
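
The retrieval-augmented generation step in this assistant can be sketched as: retrieve the knowledge-base entry most similar to the ticket, then prepend it to the LLM prompt so the draft reply is grounded in it. Here, a bag-of-words cosine similarity stands in for real embeddings, and the documents and query are hypothetical:

```python
from collections import Counter
import math

docs = [
    "Refunds are processed within 5 business days.",
    "Password resets can be requested from the login page.",
    "Shipping is free for orders over 50 euros.",
]

def bow(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str) -> str:
    """Pick the most similar document to ground the LLM's draft reply."""
    return max(docs, key=lambda d: cosine(bow(query), bow(d)))

query = "how do I reset my password"
context = retrieve(query)
prompt = f"Context: {context}\nTicket: {query}\nDraft a reply."
```

In production the bag-of-words step would be replaced by dense embeddings and a vector index, but the prompt-assembly pattern is the same.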

Open Source Projects

Selected repositories and tools maintained in the open

llm-finetune-kit
LLM · Training

Modular framework for fine-tuning and evaluating LLMs with LoRA/QLoRA, mixed precision, and experiment tracking.

vision-metrics-lab
CV · Evaluation

Toolkit for rigorous evaluation of computer vision models: calibration, robustness, and dataset-shift analysis.

go-ml-serving
Serving · Go

Lightweight high-performance ML inference server written in Go with gRPC/HTTP APIs and Prometheus metrics.

Research & Publications

Peer-reviewed work in deep learning, computer vision, and NLP

NeurIPS 2023
Efficient Low-Rank Adaptation for Large Language Models in Resource-Constrained Settings
S. Parshin, A. Ivanov, E. Morozova
LLM · Optimization · Best Paper Honorable Mention

Proposes a unified framework for low-rank adaptation of large language models, achieving up to 4× memory savings with minimal loss in accuracy across diverse downstream tasks.

CVPR 2022
Multi-Scale Vision Transformers for Robust Open-World Recognition
S. Parshin, D. Kuznetsov
Computer Vision · ViT · Top 5% Cited in Track

Introduces a multi-scale transformer backbone for visual recognition that maintains high accuracy under severe distribution shifts and label noise.

ICML 2021
Unsupervised Time-Series Anomaly Detection via Hybrid Latent Representations
S. Parshin, M. Volkova
Time Series · Anomaly Detection · Industrial Track

Combines variational autoencoders and isolation-based methods for scalable anomaly detection on streaming telemetry, deployed in production on high-throughput systems.

Career Journey

Evolution of expertise over 15+ years in AI and machine learning

2024
Independent AI Consultant
Self-employed
Advising companies on AI strategy and executing high-impact projects
2021-2023
Senior AI Research Engineer
TechAI Solutions
Led research team on generative models and large language models
2018-2020
ML Engineer
DataInnovate Inc
Developed computer vision and NLP systems for production environments
2017-2018
Junior Data Scientist
StartupAI
Built recommendation systems and conducted market analysis
2014-2018
Research Intern / Graduate Studies
Novosibirsk State Technical University
Published research in neural networks and deep learning