
How I rescued a company defrauded by an agency: rebuilding on a modern technology stack

Last verified: May 1, 2026 · 19 min read · Case study · Full-stack developer

He called me on a Friday evening. His voice was calm, but I could hear the exhaustion of a man who had spent weeks fighting a problem he had no control over. He runs a service company serving clients in four European countries. He paid an agency for a professional web platform. He got something that looked like a professional platform. Underneath was a disaster.

Due to a confidentiality agreement I cannot reveal the name of the company or its industry. What I can do is describe exactly what I found, what I did and why I chose the technologies I chose. This story is a warning for anyone who outsources platform development to an external agency.


#What I found after auditing the agency’s platform

The first step is always an audit. I don’t judge, I don’t criticise. I gather facts. After three days of analysis I had a complete picture.

Spaghetti code on PHP 5.6: the agency used no framework whatsoever. The entire platform was monolithic, procedural PHP 5.6 code (end of life in December 2018) with SQL queries pasted directly into HTML templates. No ORM, no abstraction layer, no router. Files of 3,000 lines mixed business logic with presentation.

// Found code -- SQL query directly in a template (anonymised)
<?php
$result = mysql_query("SELECT * FROM services 
    WHERE category = '" . $_GET['cat'] . "' 
    ORDER BY id DESC");
// SQL injection -- no input validation whatsoever
while ($row = mysql_fetch_assoc($result)) {
    echo "<div class='service'>";
    echo "<h2>" . $row['title'] . "</h2>"; // XSS -- no escaping
    echo "<p>" . $row['description'] . "</p>";
    echo "</div>";
}
?>

MySQL 5.5 without indexes: a database with 47 tables, none of them having any indexes beyond primary keys. A query listing services with filters performed a full table scan on 200,000 records, with an average response time of 4.7 seconds.

jQuery 1.x + Bootstrap 3: a frontend from 2014. Twelve jQuery files loaded on every page, including three different versions of the library. No minification, no bundler, no tree-shaking. Total script weight: 2.8 MB.

FTP as “deployment”: no Git repository, no CI/CD, no staging environment. The agency pushed files directly via FTP to the production server. No version control system. No tests.

Zero security: user passwords stored in MD5 without salt. Sessions stored in files on a shared server. SQL injection in 23 places. XSS in forms. No CSRF tokens. No HTTPS on the login panel.
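
For contrast: in the rebuilt platform, password storage is handled by Django's stock hashers (salted PBKDF2-SHA256 by default). A minimal sketch, assuming a configured Django project:

# Django salts and stretches passwords by default; compare with the
# unsalted MD5 found in the audit
from django.contrib.auth.hashers import make_password, check_password

hashed = make_password('s3cret-example')
# -> 'pbkdf2_sha256$<iterations>$<salt>$<hash>'
assert check_password('s3cret-example', hashed)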

#Baseline metrics

| Metric                   | Value                    | Assessment |
|--------------------------|--------------------------|------------|
| PageSpeed (mobile)       | 18                       | Critical   |
| LCP                      | 12.4 s                   | Critical   |
| INP                      | 1100 ms                  | Critical   |
| CLS                      | 0.52                     | Critical   |
| TTFB                     | 4.7 s                    | Critical   |
| Page weight              | 11.2 MB                  | Excessive  |
| API response time        | 4.7 s (avg)              | Critical   |
| Organic traffic          | -72% YoY                 | Critical   |
| Detected vulnerabilities | 23 SQL injection, 14 XSS | Critical   |

The worst part was that the client knew nothing about any of these problems. For a year the agency had been sending him reports on “optimisations” that had no basis in reality.


#Why I chose this technology stack

The decision on the target architecture is the most important moment of the project. The client had legitimate concerns: the previous agency had promised a “modern solution” and delivered code from 2014. I had to choose technologies that would solve specific problems, not ones that happened to be fashionable.

I analysed the requirements and matched tools to tasks:

Python + Django (backend API): the client needed a solid backend with an admin panel, authentication, data validation and a REST API. Django provides all of this out of the box. Django REST Framework is a mature, stable ecosystem with excellent documentation. The client serves 4 European markets, and Django has built-in internationalisation.

PostgreSQL (database): migrating from MySQL 5.5 to PostgreSQL was not a whim. PostgreSQL offers better indexes (GIN, GiST for full-text search), better data types (JSONB, arrays), mature table partitioning and reliable ACID transactions. For 200,000 records with multilingual full-text search it is the natural choice.

Redis (cache and queues): API response time from 4.7 seconds had to drop below 100 milliseconds. Redis caches query results, stores user sessions and handles asynchronous task queues (Celery). One tool, three critical functions.

React + TypeScript (interactive frontend): the client dashboard, search with filters, multi-step forms, all of this requires a reactive UI. React with TypeScript provides typed components, excellent developer tooling and a huge library ecosystem.

Rust (performance microservice): indexing the search for 200,000 records in 4 language versions, processing CSV/Excel files from clients, data transformations. These tasks required raw performance. Rust processes the search index in 1.8 seconds instead of 47 seconds in the old PHP implementation. That is not a percentage difference. It is a difference of an order of magnitude.

Astro (marketing site): the homepage, blog and services pages are static content that need no JavaScript. Astro generates clean HTML with zero runtime cost, and interactive elements (search, contact form) run as isolated React islands, which made it the natural choice for the public-facing site.

Target architecture:

┌────────────────────────────────────────────────────────┐
│                     Cloudflare CDN                     │
├──────────────┬───────────────┬─────────────────────────┤
│  Astro SSG   │  React SPA    │  Django REST API        │
│  (marketing) │  (dashboard)  │  (backend)              │
│  HTML/CSS    │  TypeScript   │  Python 3.12            │
├──────────────┴───────────────┴─────────────────────────┤
│                                                        │
│  ┌──────────────┐  ┌──────────┐  ┌───────────────────┐ │
│  │ PostgreSQL   │  │  Redis   │  │  Rust service     │ │
│  │ (primary DB) │  │  (cache, │  │  (search index,   │ │
│  │              │  │   queue) │  │   data processing)│ │
│  └──────────────┘  └──────────┘  └───────────────────┘ │
│                                                        │
│  ┌────────────────────────────────────────────────┐    │
│  │  Python AI pipeline (content processing, NLP)  │    │
│  └────────────────────────────────────────────────┘    │
└────────────────────────────────────────────────────────┘

#Backend API in Django REST Framework

The heart of the new platform is Django with Django REST Framework. I built an API handling a multilingual service catalogue, a client enquiry system, JWT authentication and an admin panel.

# services/models.py -- service model with multilingual support
from django.db import models
from django.contrib.postgres.indexes import GinIndex
from django.contrib.postgres.search import SearchVectorField

class Service(models.Model):
    slug = models.SlugField(max_length=200, unique=True)
    category = models.ForeignKey(
        'Category', on_delete=models.PROTECT, related_name='services'
    )
    is_active = models.BooleanField(default=True)
    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)

    class Meta:
        ordering = ['-created_at']

class ServiceTranslation(models.Model):
    service = models.ForeignKey(
        Service, on_delete=models.CASCADE, related_name='translations'
    )
    language = models.CharField(max_length=5, choices=[
        ('pl', 'Polski'), ('en', 'English'),
        ('de', 'Deutsch'), ('fr', 'Français'),
    ])
    title = models.CharField(max_length=200)
    description = models.TextField()
    meta_title = models.CharField(max_length=70)
    meta_description = models.CharField(max_length=160)
    search_vector = SearchVectorField(null=True)

    class Meta:
        unique_together = ['service', 'language']
        indexes = [
            GinIndex(fields=['search_vector']),
            models.Index(fields=['language', 'service']),
        ]


# services/serializers.py -- serializers with validation
from rest_framework import serializers

from .models import Service, ServiceTranslation

class ServiceTranslationSerializer(serializers.ModelSerializer):
    class Meta:
        model = ServiceTranslation
        fields = ['language', 'title', 'description',
                  'meta_title', 'meta_description']

class ServiceSerializer(serializers.ModelSerializer):
    translations = ServiceTranslationSerializer(many=True, read_only=True)
    category_name = serializers.CharField(
        source='category.name', read_only=True
    )

    class Meta:
        model = Service
        fields = [
            'id', 'slug', 'category_name',
            'is_active', 'translations', 'created_at',
        ]


# services/views.py -- API views with Redis cache
from django.utils.decorators import method_decorator
from django.views.decorators.cache import cache_page
from rest_framework import viewsets, filters
from django_filters.rest_framework import DjangoFilterBackend

from .models import Service
from .serializers import ServiceSerializer

class ServiceViewSet(viewsets.ReadOnlyModelViewSet):
    queryset = Service.objects.filter(
        is_active=True
    ).select_related(
        'category'
    ).prefetch_related(
        'translations'
    )
    serializer_class = ServiceSerializer
    filter_backends = [DjangoFilterBackend, filters.SearchFilter]
    filterset_fields = ['category__slug']
    search_fields = ['translations__title', 'translations__description']

    @method_decorator(cache_page(60 * 15))  # 15-minute cache
    def list(self, request, *args, **kwargs):
        return super().list(request, *args, **kwargs)

Redis configuration as the cache backend and Celery queue broker:

# settings.py -- Redis configuration
CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': 'redis://127.0.0.1:6379/0',
        'OPTIONS': {
            'CLIENT_CLASS': 'django_redis.client.DefaultClient',
            'SERIALIZER': 'django_redis.serializers.json.JSONSerializer',
            'CONNECTION_POOL_KWARGS': {'max_connections': 50},
        },
        'KEY_PREFIX': 'platform',
        'TIMEOUT': 900,  # 15 minutes by default
    }
}

# Sessions stored in Redis (faster than the database)
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
SESSION_CACHE_ALIAS = 'default'

# Celery with Redis as the broker
CELERY_BROKER_URL = 'redis://127.0.0.1:6379/1'
CELERY_RESULT_BACKEND = 'redis://127.0.0.1:6379/2'
CELERY_TASK_SERIALIZER = 'json'

Impact after deploying Django + PostgreSQL + Redis: average API response time dropped from 4.7 seconds to 45 milliseconds. Queries served from the Redis cache were handled in 3-5 milliseconds.
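
One concrete use of the Celery queue: after a client uploads a CSV/Excel file, the index rebuild is pushed to a background task instead of blocking the request. A minimal sketch; the task and the /reindex endpoint on the search service are illustrative assumptions, not the project's verbatim code:

# tasks.py -- queue an index rebuild on the Redis-backed Celery broker
import requests
from celery import shared_task

@shared_task(bind=True, max_retries=3, default_retry_delay=30)
def rebuild_search_index(self):
    """Ask the search microservice to rebuild its index asynchronously."""
    try:
        # /reindex is an assumed endpoint on the Rust search service
        resp = requests.post('http://127.0.0.1:8081/reindex', timeout=600)
        resp.raise_for_status()
    except requests.RequestException as exc:
        raise self.retry(exc=exc)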


#Rust microservice for search indexing

The most interesting technical challenge was the search engine. The client has a catalogue of 200,000 records in 4 language versions. The old PHP implementation executed LIKE '%term%' on MySQL without indexes, taking 47 seconds per query. Unusable.

PostgreSQL with GIN indexes and tsvector solved the problem for standard queries; a sketch of that query path from Django is below.
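
A minimal sketch, assuming the ServiceTranslation model and search_vector field defined earlier (config='simple' matches the trigger shown in the migration section; the helper itself is illustrative, not the project's verbatim code):

# Full-text search over the GIN-indexed search_vector column
from django.contrib.postgres.search import SearchQuery, SearchRank
from django.db.models import F

def search_translations(term: str, language: str):
    query = SearchQuery(term, config='simple')
    return (
        ServiceTranslation.objects
        .filter(language=language, search_vector=query)
        .annotate(rank=SearchRank(F('search_vector'), query))
        .order_by('-rank')
    )

But the client also needed: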

  • search with typo tolerance (fuzzy matching),
  • filtering by multiple attributes simultaneously with instant results,
  • index rebuilding after importing data from CSV/Excel files.

For these tasks I built a microservice in Rust using the Tantivy library (the Rust equivalent of Apache Lucene):

// search-service/src/indexer.rs -- search indexing in Rust
use tantivy::{
    schema::*, doc, Index, IndexWriter,
    tokenizer::NgramTokenizer,
};
use serde::Deserialize;
use std::time::Instant;

#[derive(Deserialize)]
pub struct ServiceRecord {
    pub id: i64,
    pub slug: String,
    pub title: String,
    pub description: String,
    pub category: String,
    pub language: String,
    pub attributes: Vec<String>,
}

pub struct SearchIndexer {
    index: Index,
    schema: Schema,
}

impl SearchIndexer {
    pub fn new(index_path: &str) -> Result<Self, Box<dyn std::error::Error>> {
        let mut schema_builder = Schema::builder();

        schema_builder.add_i64_field("id", STORED | INDEXED);
        schema_builder.add_text_field("slug", STORED);
        schema_builder.add_text_field("title", TEXT | STORED);
        schema_builder.add_text_field("description", TEXT | STORED);
        schema_builder.add_text_field("category", STRING | STORED);
        schema_builder.add_text_field("language", STRING | STORED);
        schema_builder.add_text_field("attributes", TEXT | STORED);
        // N-gram field for fuzzy/partial matching
        schema_builder.add_text_field("title_ngram", TEXT);

        let schema = schema_builder.build();
        let index = Index::create_in_dir(index_path, schema.clone())?;

        // Register n-gram tokenizer for typo tolerance
        let ngram_tokenizer = NgramTokenizer::new(2, 4, false)
            .expect("Failed to create ngram tokenizer");
        index
            .tokenizers()
            .register("ngram", ngram_tokenizer);

        Ok(Self { index, schema })
    }

    pub fn build_index(
        &self,
        records: Vec<ServiceRecord>,
    ) -> Result<usize, Box<dyn std::error::Error>> {
        let start = Instant::now();
        let mut writer: IndexWriter = self.index.writer(128_000_000)?; // 128MB buffer

        let title_field = self.schema.get_field("title").unwrap();
        let description_field = self.schema.get_field("description").unwrap();
        let title_ngram_field = self.schema.get_field("title_ngram").unwrap();

        let count = records.len();
        for record in records {
            writer.add_document(doc!(
                self.schema.get_field("id").unwrap() => record.id,
                self.schema.get_field("slug").unwrap() => record.slug,
                title_field => record.title.clone(),
                description_field => record.description,
                self.schema.get_field("category").unwrap() => record.category,
                self.schema.get_field("language").unwrap() => record.language,
                self.schema.get_field("attributes").unwrap() =>
                    record.attributes.join(" "),
                title_ngram_field => record.title,
            ))?;
        }

        writer.commit()?;
        let duration = start.elapsed();
        println!(
            "Indexed {} records in {:.2}s",
            count,
            duration.as_secs_f64()
        );

        Ok(count)
    }
}

HTTP API in Rust with the Actix-web framework:

// search-service/src/main.rs -- search API
use actix_web::{web, App, HttpServer, HttpResponse};
use serde::{Deserialize, Serialize};

#[derive(Deserialize)]
struct SearchQuery {
    q: String,
    lang: Option<String>,
    category: Option<String>,
    limit: Option<usize>,
}

#[derive(Serialize)]
struct SearchResult {
    id: i64,
    slug: String,
    title: String,
    excerpt: String,
    category: String,
    score: f32,
}

async fn search(
    query: web::Query<SearchQuery>,
    indexer: web::Data<SearchIndexer>,
) -> HttpResponse {
    let limit = query.limit.unwrap_or(20);
    let lang = query.lang.as_deref().unwrap_or("pl");

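    // SearchIndexer::search (implementation not shown) runs the Tantivy
    // query and applies the language and category filters.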
    let results = indexer.search(
        &query.q, lang, query.category.as_deref(), limit
    );

    HttpResponse::Ok().json(results)
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let indexer = SearchIndexer::new("./search_index")
        .expect("Failed to create indexer");
    let indexer_data = web::Data::new(indexer);

    HttpServer::new(move || {
        App::new()
            .app_data(indexer_data.clone())
            .route("/search", web::get().to(search))
            .route("/health", web::get().to(|| async {
                HttpResponse::Ok().body("ok")
            }))
    })
    .bind("127.0.0.1:8081")?
    .run()
    .await
}
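
On the Django side, search requests are forwarded to this service over the internal network. A minimal sketch of such a proxy view, assuming Django REST Framework and the bind address from main.rs above (the view itself is illustrative):

# search/views.py -- proxy search queries to the Rust microservice
import requests
from rest_framework.decorators import api_view
from rest_framework.response import Response

SEARCH_SERVICE_URL = 'http://127.0.0.1:8081/search'

@api_view(['GET'])
def search_proxy(request):
    params = {
        'q': request.query_params.get('q', ''),
        'lang': request.query_params.get('lang', 'pl'),
        'limit': request.query_params.get('limit', '20'),
    }
    if category := request.query_params.get('category'):
        params['category'] = category
    # Short timeout: the Rust service answers in single-digit milliseconds
    resp = requests.get(SEARCH_SERVICE_URL, params=params, timeout=2)
    return Response(resp.json(), status=resp.status_code)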

Rust microservice benchmark results:

| Operation                  | Old PHP | New Rust | Improvement   |
|----------------------------|---------|----------|---------------|
| Index build (200k records) | 47 s    | 1.8 s    | 26× faster    |
| Simple search              | 4.7 s   | 2 ms     | 2,350× faster |
| Search with filters        | 8.3 s   | 5 ms     | 1,660× faster |
| Fuzzy matching (typos)     | None    | 8 ms     | New feature   |
| Memory usage               | 512 MB  | 84 MB    | 6× less       |

Rust was not chosen because it is trendy. It was chosen because for this specific task, processing 200,000 records with n-gram indexing and fuzzy matching, it delivers performance that interpreted languages simply cannot match.


#React frontend with interactive dashboard

The client dashboard required a reactive UI: sortable tables, multi-level filters, multi-step forms, charts with real-time data. React with TypeScript is the natural choice.

// src/components/ServiceSearch.tsx -- search with filters
import { useState, useCallback } from 'react';
import { useQuery } from '@tanstack/react-query';
import { useDebounce } from '@/hooks/useDebounce';

interface SearchResult {
  id: number;
  slug: string;
  title: string;
  excerpt: string;
  category: string;
  score: number;
}

interface SearchFilters {
  category: string | null;
  language: string;
}

export function ServiceSearch({ locale }: { locale: string }) {
  const [query, setQuery] = useState('');
  const [filters, setFilters] = useState<SearchFilters>({
    category: null,
    language: locale,
  });

  const debouncedQuery = useDebounce(query, 300);

  const { data: results, isLoading } = useQuery<SearchResult[]>({
    queryKey: ['search', debouncedQuery, filters],
    queryFn: async () => {
      const params = new URLSearchParams({
        q: debouncedQuery,
        lang: filters.language,
        ...(filters.category && { category: filters.category }),
      });
      const res = await fetch(`/api/search?${params}`);
      return res.json();
    },
    enabled: debouncedQuery.length >= 2,
    staleTime: 5 * 60 * 1000, // 5-minute cache
  });

  const handleCategoryChange = useCallback((category: string | null) => {
    setFilters(prev => ({ ...prev, category }));
  }, []);

  return (
    <div className="search-container">
      <div className="relative">
        <input
          type="search"
          value={query}
          onChange={(e) => setQuery(e.target.value)}
          placeholder={locale === 'pl' ? 'Szukaj usług...' : 'Search services...'}
          className="w-full px-4 py-3 rounded-lg border border-gray-200
                     dark:border-gray-700 bg-white dark:bg-gray-800
                     focus:ring-2 focus:ring-emerald-500 focus:outline-none"
        />
        {isLoading && (
          <div className="absolute right-3 top-1/2 -translate-y-1/2">
            <span className="animate-spin h-5 w-5 border-2
                           border-emerald-500 border-t-transparent rounded-full
                           inline-block" />
          </div>
        )}
      </div>

      {results && results.length > 0 && (
        <div className="mt-4 grid gap-4 md:grid-cols-2 lg:grid-cols-3">
          {results.map((result) => (
            <a
              key={result.id}
              href={`/${locale}/${result.slug}/`}
              className="block p-4 rounded-lg border border-gray-100
                         dark:border-gray-700 hover:border-emerald-500
                         transition-colors"
            >
              <span className="text-xs font-medium text-emerald-600
                              dark:text-emerald-400">
                {result.category}
              </span>
              <h3 className="mt-1 font-semibold text-gray-900
                            dark:text-white">
                {result.title}
              </h3>
              <p className="mt-2 text-sm text-gray-600 dark:text-gray-400
                           line-clamp-2">
                {result.excerpt}
              </p>
            </a>
          ))}
        </div>
      )}
    </div>
  );
}

The React component is embedded in the Astro page as an interactive island:

---
// src/pages/[lang]/services.astro -- services page
import { ServiceSearch } from '../../components/ServiceSearch';
import Layout from '../../layouts/Layout.astro';
---

<Layout title="Usługi">
  <section class="services-hero">
    <h1>Nasze usługi</h1>
  </section>

  <!-- React island -- loads when scrolled into view, doesn't block the rest of the page -->
  <ServiceSearch client:visible locale={Astro.params.lang} />
</Layout>

Thanks to the islands architecture, the marketing page ships 0 KB of JavaScript by default. The search component (38 KB gzipped) loads only when the user scrolls to that section.


#AI pipeline for content processing

Migrating 1,200 content pages from dirty HTML to clean Markdown required automation. Many pages had incomplete SEO metadata, missing descriptions, sub-optimal headings. Instead of fixing them manually I built a pipeline in Python using a custom language model.

The pipeline ran in three phases:

# ai_content_pipeline.py -- content processing with AI
import json
import re
from dataclasses import dataclass, field
from pathlib import Path
from concurrent.futures import ThreadPoolExecutor

import markdownify

@dataclass
class ContentAnalysis:
    entities: list[dict] = field(default_factory=list)
    meta_description: str = ''
    heading_issues: list[str] = field(default_factory=list)
    word_count: int = 0
    is_thin: bool = False
    suggested_internal_links: list[str] = field(default_factory=list)

def clean_legacy_html(raw_html: str) -> str:
    """Phase 1: Cleaning dirty HTML from the legacy platform."""
    # Drop script/style/embed blocks entirely here, because markdownify's
    # strip= and convert= options are mutually exclusive
    cleaned = re.sub(
        r'<(script|style|iframe|object|embed)[^>]*>.*?</\1>', '',
        raw_html, flags=re.DOTALL | re.IGNORECASE,
    )
    # Remove inline styles, IE conditional comments and empty tags
    cleaned = re.sub(r'style="[^"]*"', '', cleaned)
    cleaned = re.sub(r'<!--\[if.*?\]>.*?<!\[endif\]-->', '', cleaned,
                     flags=re.DOTALL)
    cleaned = re.sub(r'<(div|span|p)[^>]*>\s*</\1>', '', cleaned)
    cleaned = re.sub(r'&nbsp;', ' ', cleaned)
    return cleaned

def convert_to_markdown(html: str) -> str:
    """Phase 2: Converting HTML to Markdown."""
    # Only a whitelist of tags is converted; everything removable was
    # already stripped in phase 1
    return markdownify.markdownify(
        html,
        heading_style="ATX",
        convert=['h1', 'h2', 'h3', 'h4', 'p', 'a', 'img',
                 'ul', 'ol', 'li', 'strong', 'em', 'table'],
    )

def analyze_with_ai(content: str, model_client) -> ContentAnalysis:
    """Phase 3: Content analysis using a custom AI model."""
    prompt = f"""Przeanalizuj poniższą treść strony internetowej.
Zwróć JSON z polami:
- entities: lista obiektów {{name, type, relevance_score}}
- meta_description: optymalny opis meta (max 155 znaków, po polsku)
- heading_issues: lista problemów ze strukturą nagłówków
- word_count: liczba słów
- suggested_internal_links: sugerowane frazy do linkowania wewnętrznego

Treść:
{content[:4000]}"""

    response = model_client.generate(prompt, max_tokens=1024)
    data = json.loads(response.text)

    return ContentAnalysis(
        entities=data.get('entities', []),
        meta_description=data.get('meta_description', ''),
        heading_issues=data.get('heading_issues', []),
        word_count=data.get('word_count', 0),
        is_thin=data.get('word_count', 0) < 300,
        suggested_internal_links=data.get('suggested_internal_links', []),
    )

def process_batch(
    content_dir: Path, model_client, max_workers: int = 4
) -> dict:
    """Batch processing with multithreading."""
    stats = {'processed': 0, 'thin': 0, 'entities': 0}

    def process_file(md_file: Path) -> ContentAnalysis:
        content = md_file.read_text(encoding='utf-8')
        return analyze_with_ai(content, model_client)

    files = list(content_dir.glob('*.md'))
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        results = executor.map(process_file, files)
        for analysis in results:
            stats['processed'] += 1
            if analysis.is_thin:
                stats['thin'] += 1
            stats['entities'] += len(analysis.entities)

    return stats

The pipeline processed 1,247 pages in 4 hours on self-hosted infrastructure: not in the cloud, not through external APIs. It found 3,400 unique entities, generated missing meta descriptions for 680 pages and identified 89 thin-content pages requiring expansion.

The AI model ran on a dedicated server with a GPU. I did not use AWS, Azure or any cloud-managed solution. Full control over client data, zero content sent to external APIs.


#Data migration from MySQL to PostgreSQL

Migrating the database from MySQL 5.5 to PostgreSQL required not just moving data but a fundamental schema rebuild. The old schema had no relationships, no indexes and no integrity constraints.

-- Schema migration: from chaos to order

-- Old MySQL (no relationships, no indexes)
-- CREATE TABLE services (id INT AUTO_INCREMENT, title VARCHAR(255), ...);
-- CREATE TABLE categories (id INT AUTO_INCREMENT, name VARCHAR(100), ...);
-- No FOREIGN KEY constraints, no indexes beyond the PK

-- New PostgreSQL with proper structure
CREATE TABLE categories (
    id SERIAL PRIMARY KEY,
    slug VARCHAR(200) UNIQUE NOT NULL,
    parent_id INTEGER REFERENCES categories(id),
    created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE services (
    id SERIAL PRIMARY KEY,
    slug VARCHAR(200) UNIQUE NOT NULL,
    category_id INTEGER NOT NULL REFERENCES categories(id),
    is_active BOOLEAN DEFAULT true,
    created_at TIMESTAMPTZ DEFAULT NOW(),
    updated_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE service_translations (
    id SERIAL PRIMARY KEY,
    service_id INTEGER NOT NULL REFERENCES services(id) ON DELETE CASCADE,
    language VARCHAR(5) NOT NULL,
    title VARCHAR(200) NOT NULL,
    description TEXT NOT NULL,
    meta_title VARCHAR(70) NOT NULL,
    meta_description VARCHAR(160) NOT NULL,
    search_vector TSVECTOR,
    UNIQUE(service_id, language)
);

-- Indexes for efficient searching
CREATE INDEX idx_translations_search ON service_translations
    USING GIN(search_vector);
CREATE INDEX idx_translations_lang ON service_translations(language);
CREATE INDEX idx_services_category ON services(category_id)
    WHERE is_active = true;

-- Trigger for automatic search_vector updates
CREATE OR REPLACE FUNCTION update_search_vector()
RETURNS TRIGGER AS $$
BEGIN
    NEW.search_vector :=
        setweight(to_tsvector('simple', COALESCE(NEW.title, '')), 'A') ||
        setweight(to_tsvector('simple', COALESCE(NEW.description, '')), 'B');
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_search_vector
    BEFORE INSERT OR UPDATE ON service_translations
    FOR EACH ROW EXECUTE FUNCTION update_search_vector();
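
The schema above covers structure; the rows themselves were copied separately. A minimal sketch of that step, assuming pymysql and psycopg are available; connection details, batch size and the column list are placeholders:

# migrate_rows.py -- stream rows from legacy MySQL into PostgreSQL
import pymysql
import psycopg

BATCH = 1_000

src = pymysql.connect(host='localhost', user='legacy',
                      password='***', database='legacy_db')
dst = psycopg.connect('dbname=platform user=platform')

with src.cursor() as read_cur, dst.cursor() as write_cur:
    read_cur.execute("SELECT id, slug, category_id FROM services")
    while rows := read_cur.fetchmany(BATCH):
        write_cur.executemany(
            """INSERT INTO services (id, slug, category_id)
               VALUES (%s, %s, %s)
               ON CONFLICT (id) DO NOTHING""",
            rows,
        )
    dst.commit()
# After copying explicit ids, reset the SERIAL sequences with setval()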

#SEO migration: redirects and structured data

The most critical element of the migration was preserving Google rankings. Despite the old platform losing traffic, it still had hundreds of indexed URLs and dozens of backlinks.

I mapped every old URL to the new one and deployed 301 redirects at the Cloudflare Workers level:

// redirects.ts -- redirect map from the old platform
const redirectMap: Record<string, string> = {
  '/uslugi.php?id=1': '/pl/uslugi/konsulting/',
  '/services.php?id=1': '/en/services/consulting/',
  '/index.php?page=about': '/pl/o-nas/',
  '/kontakt.php': '/pl/kontakt/',
  '/leistungen.php': '/de/dienstleistungen/',
  // ... 963 redirects generated automatically
};

export function handleRedirects(url: URL): Response | null {
  // Check for exact match
  const exactMatch = redirectMap[url.pathname + url.search];
  if (exactMatch) {
    return new Response(null, {
      status: 301,
      headers: { Location: exactMatch },
    });
  }

  // Check for path-only match
  const pathMatch = redirectMap[url.pathname];
  if (pathMatch) {
    return new Response(null, {
      status: 301,
      headers: { Location: pathMatch },
    });
  }

  return null;
}

I deployed complete Schema.org structured data with hreflang for each language version. The old platform had no structured data, so Google had no understanding of what the site was about.
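
A minimal sketch of the per-language Service markup, generated from the translation records shown earlier (the field mapping is illustrative, not the project's verbatim code):

# Schema.org JSON-LD for one service in one language version
import json

def service_jsonld(translation, url: str) -> str:
    data = {
        '@context': 'https://schema.org',
        '@type': 'Service',
        'name': translation.title,
        'description': translation.meta_description,
        'inLanguage': translation.language,
        'url': url,
    }
    return json.dumps(data, ensure_ascii=False)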


#Results after 4 months

| Metric                   | Before   | After    | Change     |
|--------------------------|----------|----------|------------|
| PageSpeed (mobile)       | 18       | 99       | +450%      |
| LCP                      | 12.4 s   | 0.3 s    | -98%       |
| INP                      | 1100 ms  | 22 ms    | -98%       |
| CLS                      | 0.52     | 0.01     | -98%       |
| TTFB                     | 4.7 s    | 0.03 s   | -99%       |
| Page weight              | 11.2 MB  | 0.28 MB  | -97%       |
| API response time        | 4700 ms  | 45 ms    | -99%       |
| Search (200k records)    | 47 s     | 2 ms     | 23,500×    |
| Organic traffic          | Baseline | +340%    | Growth     |
| Client enquiries         | ~2/week  | ~14/week | +600%      |
| Security vulnerabilities | 37       | 0        | Eliminated |

Full technology stack for the project:

  • Backend API: Python 3.12, Django 5, Django REST Framework, Celery
  • Database: PostgreSQL 16 with GIN indexes and full-text search
  • Cache and queues: Redis 7 (cache, sessions, Celery broker)
  • Search microservice: Rust, Actix-web, Tantivy
  • Interactive frontend: React 19, TypeScript, TanStack Query, Tailwind CSS
  • Marketing site: Astro 5 with React islands
  • AI pipeline: Python, custom language model on dedicated GPU
  • Deployment: Cloudflare Pages + Workers, GitHub Actions CI/CD
  • Monitoring: Sentry, Prometheus, Grafana

#What this project taught me

Don’t judge, diagnose. The client came wounded by a bad experience with an agency. The last thing he needed was another “expert” telling him how bad things were. Instead I presented the facts as a report, explained the risks and proposed an action plan with a clear timeline.

Match the tool to the task, not the task to the tool. Django for the API, React for the interactive UI, Rust for data processing, Astro for static pages. Each technology solves a specific problem. A monolithic framework for everything is a recipe for compromises.

AI is a tool, not magic. The AI pipeline processed 1,247 pages in 4 hours, work that would have taken weeks manually. But every result required human verification. AI generated suggestions, a human made decisions. A custom model on own infrastructure gives full control over client data.

Rust justifies itself in specific cases. I did not write the entire platform in Rust. I wrote one microservice in Rust, the one that processes 200,000 records, and there the performance difference is an order of magnitude. The rest of the system works perfectly in Python and TypeScript.

SEO migration is not optional. It is mandatory. Without URL mapping and 301 redirects the client would have lost what remained of his organic traffic. Thanks to a proper migration traffic grew by 340% in 4 months.


#Do you need rescue after a bad agency experience?

If your platform was built on outdated technologies, is slow, insecure or simply does not work as it should, contact WPPoland. I will carry out a free initial audit and present a remediation plan with a clear timeline and scope of work.

Every rescue project starts with one call. This client called on a Friday evening. By Monday morning he had a report. Eight weeks later he had a platform he is proud of.


#FAQ

How long does a platform migration from a legacy stack to a modern architecture take?
The time depends on the scale of the project and is agreed individually after the audit. A simple platform takes 4-6 weeks. A complex system with multiple integrations, multi-language support and interactive components takes 8-14 weeks.

Why Rust instead of Node.js or Go for microservices?
Rust offers performance comparable to C/C++ with memory safety guarantees. For tasks that require processing large datasets, search indexing and file transformations, Rust delivers a performance advantage of an order of magnitude over Node.js.

Does migrating to a new stack affect Google rankings?
A properly executed migration with URL mapping and 301 redirects protects existing rankings. Improvements to Core Web Vitals after migration typically result in organic traffic growth of 30-50 percent within 3 months.

What should you do when an agency has defrauded you and left the platform without documentation?
The first step is an initial audit: checking the code, licences, security and performance. Then recovering data and content from the existing installation. Only then planning the migration to a new platform with full documentation.
