PHP-ORT

Machine Learning Inference for the Web

First-class ML capabilities for PHP developers

78% of websites use PHP
8x faster with AVX2

The Inevitable Transformation

Software is changing faster than we've seen in 25 years. Machine learning isn't just becoming important, it's becoming essential. Every application, every website, every digital interaction will soon expect intelligent features as standard.

For millions of PHP developers who power the web, this creates an existential challenge: stay relevant in an AI-first world or risk obsolescence.

The Stakes: This isn't about technology preferences, it's about livelihoods. Mortgages, families, careers built on PHP expertise.

The Current Reality

  • PHP dominates the web: 78% of websites use PHP as their server-side language
  • ML is becoming essential: Every modern application needs intelligent features
  • The gap is growing: PHP has no first-class machine learning support
  • The choices are terrible: burden your stack with microservices, remote API calls, or other inefficient approaches (e.g. FFI), or switch stacks entirely
<?php
// This is what PHP developers face today
$user_input = $_POST['message'];

// No native way to do this in PHP:
// $sentiment = analyze_sentiment($user_input);
// $classification = classify_text($user_input);
// $recommendation = get_recommendation($user_profile);

// Instead, developers must:
// 1. Call external Python services
// 2. Use slow REST APIs
// 3. Learn entirely new stacks
// 4. Or simply go without ML capabilities

Why This Matters

When we mention numbers like 78%, convention demands all kinds of caveats about how servers report their software, and so on.

To focus on the exact number is to miss the point entirely: PHP is, without doubt, the most widely deployed server-side web programming language on the planet.

If ML inference becomes a first class citizen in PHP, we democratize machine learning at unprecedented scale:

Every PHP developer becomes capable of building intelligent applications, but more importantly, they become able to innovate in this space.

Genuine innovation is a far reach when what you can do is mostly determined by remote endpoints. Innovation comes much easier when the power is at your fingertips.


As tradition demands, a caveat: the vision of the future presented here may never come to pass; perhaps AI is vaporware. But the relative cost of behaving as if that future will arrive is small, whatever the future brings.

The Solution: Production-Ready ML Inference

PHP-ORT brings first-class machine learning inference to PHP through rigorous engineering and performance optimization.


PHP-ORT is foundational infrastructure for AI in PHP, enabling developers to build intelligent applications with familiarity and ease.

The combination of a core Tensor API, a high-performance Math library, and optional ONNX integration opens the door to direct inference in PHP, at scale.
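As an illustration only, inference with these components might look something like the sketch below. The classes ORT\Model, ORT\Runtime, and ORT\Tensor are named in this document, but every constructor and method signature shown here is an assumption, not the final API.

<?php
// Hypothetical sketch: signatures and method names are illustrative, not final.
use ORT\Model;
use ORT\Runtime;
use ORT\Tensor;

// Load an ONNX model from disk (ORT\Model handles loading and metadata)
$model = new Model('sentiment.onnx');

// Build an input tensor from request data (shape, dtype, and the
// constructor form are assumptions for the sake of the example)
$input = new Tensor([1, 128], Tensor::FLOAT32, $features);

// Run inference through ORT\Runtime
$runtime = new Runtime($model);
$output  = $runtime->run(['input' => $input]);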

Try It Now

Performance That Matters

Elementwise Math

0.4 ms (≥1 GFLOPS)

1000×1000 float32 matrices

SIMD Acceleration

8x

speedup with AVX2

Hybrid Parallelism

100%

CPU utilization

Complementary, Not Competitive

This is about finding PHP a place in the AI-first software engineering world that's coming: not only does AI need to be a first-class citizen in the minds of developers, but also in the languages they use:

  • Python: Training, research, model development
  • PHP: Web inference, production serving, real-time processing
  • ONNX: The bridge between training and inference

This isn't about replacing Python for training!

Technical Excellence

This isn't a proof of concept, it's production-ready infrastructure built to exacting standards.

Core Architecture

  • PHP API Layer: Clean, decoupled API: ORT\Tensor, the ORT\Math functional namespace, ORT\Model, ORT\Runtime
  • Math Library:
      Frontend: Dispatch, scheduling, and scalar fallbacks (always available)
      Backend: Silent SIMD optimizations (NEON, AVX2, SSE4.1, SSE2)
  • ONNX Integration:
      ORT\Model: Loading, metadata, and management
      ORT\Runtime: Inference
  • ORT\Tensor: Immutable API; always available
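To make the frontend/backend split concrete, here is a minimal sketch of an elementwise operation through the ORT\Math functional namespace. The namespace is named in this document, but the add() function and the Tensor constructor shown are assumptions for illustration.

<?php
// Hypothetical sketch; only ORT\Math\backend() and ORT\Math\cores()
// appear elsewhere in this document, everything else is assumed.
use ORT\Tensor;

$a = new Tensor([1000, 1000], Tensor::FLOAT32);
$b = new Tensor([1000, 1000], Tensor::FLOAT32);

// Elementwise add: the frontend dispatches to a SIMD backend when
// one is available (AVX2, NEON, ...) and falls back to the scalar
// implementation otherwise, with no change at the call site.
$c = \ORT\Math\add($a, $b);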

Key Technical Innovations

  • Immutable Tensors: Zero-copy, lock-free data sharing, efficient memory usage, and predictable performance
  • Dual Storage Class Tensors: ORT\Tensor\Persistent survives request cycles (and may be shared among threads, e.g. under FrankenPHP); ORT\Tensor\Transient is request-local
  • Memory Management: Aligned allocator and optimized memcpy for maximum performance potential
  • Thread Pool: Dedicated Slot scheduling implementation with alignment awareness
  • SIMD Optimization: Runtime detection with thread pinning ensures stability and predictability at scale
  • Type System: Schemas extracted from NumPy runtime; no guesswork, perfect compatibility
  • Type Promotion: Automatic conversion between types for seamless integration and maximum predictability
  • Zero Overhead Optimizations: Backend silently (opaque to the frontend) optimizes dispatch
  • Call Site Scaling: ORT\Math\scale provides a high degree of control over scaling at the call site
  • Modular Design: Math and ONNX systems are independent - use either or both as needed
  • Flexible Tensor Generation: ORT\Tensor\Generator provides flexible generation (lazy loading, random, etc)
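The dual storage classes above might be used as follows. The class names ORT\Tensor\Persistent and ORT\Tensor\Transient come from this document; the constructor forms are assumptions.

<?php
// Hypothetical sketch of the dual storage classes; constructor
// signatures are assumptions, not the final API.
use ORT\Tensor\Persistent;
use ORT\Tensor\Transient;

// Survives the request cycle, and may be shared between threads
// in long-running runtimes such as FrankenPHP: a natural home for
// model weights loaded once and reused.
$weights = new Persistent([1024, 1024], Persistent::FLOAT32);

// Request-local: freed automatically when the request ends, a
// natural home for per-request inputs and intermediate results.
$scratch = new Transient([1, 1024], Transient::FLOAT32);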

Performance Benchmarks

Benchmark Chart

All benchmarks are reproducible and available in the bench directory.

Getting Started

# Install dependencies
sudo apt-get install libonnxruntime-dev pkg-config

# Build extension
phpize
./configure --enable-ort
make
sudo make install

# Add to php.ini
extension=ort.so
<?php
// Check your system capabilities
echo "Backend: " . (ORT\Math\backend() ?: "scalar") . "\n";
echo "Cores: " . ORT\Math\cores() . "\n";

For the avoidance of any doubt: the design and features of the extension are production-ready; the code is in heavy development, and releases will be announced.

Addressing the Inevitable Criticisms

Let's be honest about the scope and address the skeptics directly.

"No ML ecosystem around PHP"

Response:

Ecosystems need a functional center to grow around. This is that center. You can't have an ecosystem without foundational infrastructure.

"PHP isn't suitable for HPC"

Response:

The thing that makes Python suitable for ML isn't the interpreter or whether or not they have an effective JIT compiler (they don't); it's the ecosystem and libraries built around it.

"Type requirements too complex"

Response:

We extract schemas from NumPy runtime. No guesswork, no compatibility issues; strict adherence to proven standards.

"PHP threading is dangerous"

Response:

Wrong, and irrelevant: no interpreter threads are used. Threads are used at the layer beneath the extension to distribute computation.

"Performance claims exaggerated"

Response:

Every benchmark is reproducible. Code is inspectable. Thread pool is visible. These are engineering results, not marketing numbers.

"One person can't maintain this"

Response:

Valid concern. But major PHP features have historically been driven by individuals who proved the concept and attracted contributors.

Reality Check: What's Actually Realistic

What's NOT being claimed:
  • PHP will dominate ML training
  • Python will become irrelevant
  • This will change the industry overnight
  • Every PHP developer will become an ML expert
What IS being claimed:
  • PHP developers can now do ML inference without leaving PHP
  • The web (mostly PHP) can now have first-class ML capabilities
  • This provides foundation for PHP ML ecosystem to grow
  • Performance is competitive with any inference solution

The Scope

This focuses specifically on inference, not the entire ML stack. It's not trying to replace Python for training, just enabling inference where PHP already dominates.

The Track Record

This was developed by the same person who delivered numerous core improvements to PHP, and many complex features by extension, including threading. You may already depend on infrastructure they developed!


I am @krakjoe, what more do you want?

The Path Forward

Success doesn't require changing the world overnight, it requires building the foundation for sustainable growth.

Realistic Timeline

Technical Validation ✓

Production-ready implementation with rigorous testing

Early Adoption

Get production deployments and real-world validation

Community Building

Attract contributors and framework integrations

Ecosystem Growth

Enable domain-specific libraries and tools

Standard Practice

Become the standard way to do ML inference in PHP

The Opportunity

PHP's web dominance provides a unique distribution advantage. If ML inference becomes as easy as database queries, it will spread organically across the web ecosystem.

<?php
// The future: ML as common as database queries
$user = User::find($id);
$recommendations = ML::recommend($user);
$sentiment = ML::analyze($comment);
$classification = ML::classify($image);

// Just as natural as:
$posts = Post::where('published', true)->get();

Real-World Impact

  • Developer Retention: Millions of PHP developers stay relevant
  • Web Evolution: Every PHP website becomes a potential ML application
  • Accessibility: ML inference becomes as common as database queries
  • Innovation: New applications emerge when barriers are removed

Join the Evolution

This isn't about revolution, it's about evolution. PHP has always adapted to stay relevant. From simple scripts to enterprise applications, and now to intelligent applications.

The Vision: Every corner of the web becomes capable of intelligent behavior. PHP developers don't need to choose between staying with PHP and building the future, they can do both.

Machine learning inference isn't a luxury anymore, it's becoming essential infrastructure. This is PHP's answer to the AI revolution.