In development & production

Built and running.

Selected projects from Axiforge Systems — spanning cognitive software, proprietary mechanical R&D, and autonomous systems.

Operational · Live — Production Ready

LED Strip Production Line

Embedded Systems · AI Quality Control · Industrial Automation

Fully operational LED strip production line built on custom embedded control systems and AI-powered quality inspection. Demonstrates Axiforge Systems' capability to deliver end-to-end industrial automation — from embedded firmware to AI inference on the production floor.

Embedded Systems · AI Quality Control · Industrial Automation · STM32 · Computer Vision
Live · AI Platform · memvo.pl

Memvo — AI Language Immersion Platform

Memory Palace AI · LiveLens Camera · LensCompanion Device

The world's first platform combining Memory Palace AI, real-time camera immersion, and offline hardware learning into a single English acquisition ecosystem. Three modules — web, mobile PWA, dedicated device.

Visit memvo.pl →
Patent Pending · Mechanical R&D · IPO 2025

Novel Transmission Mechanism

UK patent application pending · IPO 2025

A novel mechanical transmission concept developed by Axiforge founder Matthew Foster. The mechanism achieves improved torque density and efficiency in a reduced form factor — with direct applications in UAV propulsion, actuator systems, and mobile robotics. Prototype validated. Full technical details available under NDA.

Licensing enquiries →
Featured Project · World's First AI Language Learning Device

Memvo LensCompanion

LensCompanion is a dedicated English learning device that runs entirely offline — no cloud, no AI subscription, no privacy concerns. Built on Raspberry Pi 5 with Sony's IMX500 image sensor, it is the only language learning device in existence that uses on-chip neural network acceleration for real-time object detection.

Raw video frames are processed directly on the camera sensor's built-in AI chip and never transmitted — not to the cloud, not even to the host processor. This is privacy-by-hardware architecture, not a policy promise.

Processor: Raspberry Pi 5 (16GB RAM)
Camera: Sony IMX500 — on-chip AI accelerator
Detection: EfficientDet-Lite0 · 30fps · 80+ classes · 0% CPU
STT: Whisper.cpp small.en · offline · ~250ms latency
TTS: Piper TTS en-GB · offline · ~150ms latency
Dialogue LLM: Qwen2.5 3B Q4 · offline · 5–8 tok/s
Vision LLM: Moondream2 Q4 · offline · scene description in ~4s
Connectivity: WiFi sync with memvo.pl · Bluetooth headphones
Battery life: ~4–5h (10Ah power bank) or continuous power
API cost: $0.00 per session (everything runs locally)

Three Modes of Operation

HOME MODE

Always On, Always Learning

Passive 24/7 operation. The IMX500 detects objects in your environment — a laptop, a coffee cup, a book — and the device quietly whispers the English name with IPA pronunciation. Every 30 seconds, PoseNet detects your body posture and teaches action verbs: "You are sitting", "You are reaching", "You are walking." Power consumption: ~3W. Silent operation.
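The repeat-suppression this mode implies (a laptop that stays in view all day should not be re-announced on every frame) can be sketched as a small cooldown gate. The class name, default cooldown, and injectable clock below are illustrative assumptions, not the device's actual implementation:

```python
import time

class AnnouncementGate:
    """Decide whether a newly detected object should be announced.

    The same label is spoken at most once per `cooldown` seconds,
    so an object that stays in view is not announced repeatedly.
    """

    def __init__(self, cooldown=300.0, clock=time.monotonic):
        self.cooldown = cooldown
        self.clock = clock        # injectable for testing
        self.last_spoken = {}     # label -> timestamp of last announcement

    def should_announce(self, label):
        now = self.clock()
        last = self.last_spoken.get(label)
        if last is not None and now - last < self.cooldown:
            return False          # still inside the cooldown window
        self.last_spoken[label] = now
        return True
```

The injected clock keeps the logic deterministic for testing; on the device, the same gate would sit between the IMX500 detection callback and the TTS queue.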

STUDY MODE

Your Pocket Language Tutor

Active vocabulary sessions using words from your Memvo Memory Palace. Three GPIO buttons: Know / Don't Know / Next. Qwen AI explains words you struggle with. Results sync to memvo.pl after each session.
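The button-to-result flow can be sketched in plain Python. The `StudySession` class and the sync payload shape are hypothetical illustrations; the real memvo.pl sync format is not public:

```python
import json

KNOW, DONT_KNOW, NEXT = "know", "dont_know", "next"

class StudySession:
    """Track Know / Don't Know / Next presses for a word list and
    build a JSON payload to sync after the session (shape assumed)."""

    def __init__(self, words):
        self.words = list(words)
        self.index = 0
        self.results = {}  # word -> "know" | "dont_know"

    @property
    def current_word(self):
        return self.words[self.index] if self.index < len(self.words) else None

    def press(self, button):
        word = self.current_word
        if word is None:
            return
        if button in (KNOW, DONT_KNOW):
            self.results[word] = button
        self.index += 1  # all three buttons advance to the next word

    def sync_payload(self):
        return json.dumps({
            "known": [w for w, r in self.results.items() if r == KNOW],
            "review": [w for w, r in self.results.items() if r == DONT_KNOW],
            "skipped": [w for w in self.words[:self.index]
                        if w not in self.results],
        })
```

On the device, the three GPIO buttons would simply call `press()` with the matching constant.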

WALK MODE

AI Narrates Your World

Take the device on a walk. Moondream2 describes what the camera sees every 8 seconds, weaving in your palace vocabulary naturally. Push-to-talk button for two-way conversation with Qwen AI. Speech recognition via Whisper.cpp — all processed locally.
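One way palace vocabulary could be woven into each Moondream2 request is a prompt builder like the following; the function and prompt wording are assumptions, not the device's actual prompt:

```python
def build_narration_prompt(palace_words, max_words=3):
    """Construct a vision-LLM prompt for one WALK MODE frame,
    nudging the model to reuse a few palace vocabulary words."""
    picks = palace_words[:max_words]
    prompt = "Describe this scene in one short sentence of simple English."
    if picks:
        prompt += " If natural, use these words: " + ", ".join(picks) + "."
    return prompt
```

Capping the word count keeps the request small enough for the ~4s on-device description budget listed above.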

☀️ MORNING REPORT

Every morning at 7:00, LensCompanion delivers a personalised English summary: which objects you saw most, new words encountered for the first time, active hours, and an AI-generated language tip from Qwen for your most-seen word.
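The aggregation behind such a report can be sketched as a pure function over a day's detection log. The `(label, hour)` log format here is a hypothetical stand-in for the on-device schema:

```python
from collections import Counter

def morning_report(detections, known_words):
    """Summarise one day of detection events for the 7:00 report.

    `detections` is a list of (label, hour) tuples; `known_words`
    are labels already seen on previous days.
    """
    labels = [label for label, _ in detections]
    counts = Counter(labels)
    return {
        "top_objects": [label for label, _ in counts.most_common(3)],
        "new_words": sorted(set(labels) - set(known_words)),
        "active_hours": sorted({hour for _, hour in detections}),
    }
```

The resulting dictionary would then be rendered into the spoken English summary, with the Qwen language tip generated for `top_objects[0]`.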

Custom Oxford 3000 Detector — In Development

A custom IMX500 model trained on Oxford 3000 vocabulary will replace the generic COCO detector, enabling recognition of 400+ Oxford words directly on the camera chip.

Humane AI Pin: $700. Failed.
Rabbit R1: $200. Failed.
Both failed because they tried to do everything for everyone.
LensCompanion does one thing: English acquisition.
It does it better than any device ever built.

Part of the Memvo ecosystem — memvo.pl

Enquire about LensCompanion →

Live demo — Memvo

Try the web platform

Fully interactive — embedded directly from production.