Farid Kazimov
Tags: GenAI · ComputerVision · LLM · Transformers · YOLO · NLP · external-services
  • Retrieval-Augmented Generation (RAG): Building Knowledge-Aware LLM Systems

    An advanced deep dive into Retrieval-Augmented Generation (RAG), covering architecture, embeddings, vector databases, and real-world trade-offs.

    4 min read   ·   March 29, 2026

    2026   ·   RAG   LLM   Embeddings   VectorDB   NLP   ·   blog

  • Fine-Tuning Large Language Models: From Full Training to Parameter-Efficient Methods

    An advanced deep dive into fine-tuning LLMs, covering full fine-tuning, PEFT methods like LoRA, and real-world trade-offs.

    4 min read   ·   March 26, 2026

    2026   ·   LLM   Fine-Tuning   LoRA   PEFT   Deep Learning   ·   blog

  • Encoder vs Decoder: Understanding BERT, GPT and Modern LLM Architectures

    A deep dive into encoder-only, decoder-only, and encoder-decoder architectures, and how models like BERT, GPT, and BART differ.

    4 min read   ·   March 25, 2026

    2026   ·   LLM   Transformers   BERT   GPT   Encoder   Decoder   ·   blog

  • A Deep Dive into Attention: Self-Attention, Multi-Head Attention and Positional Encoding

    A comprehensive guide to attention mechanisms in Transformers, including intuition, QKV, self-attention, multi-head attention, and positional encoding.

    4 min read   ·   March 24, 2026

    2026   ·   Attention   Transformers   LLM   Deep Learning   ·   blog

  • Transformer Architecture Explained: Attention is All You Need

    A deep dive into Transformer architecture, including encoder-decoder structure, attention mechanism, positional encoding, and multi-head attention.

    3 min read   ·   March 22, 2026

    2026   ·   Transformers   LLM   Attention   Deep Learning   ·   blog

© Copyright 2026 Farid Kazimov.