Overview of current OpenAI services and products
A deep dive into how transformers separate syntactic structure from semantic meaning using attention heads, layer depth, and learned latent spaces.
An intuitive look into how transformer models like LLMs train, learn attention patterns, and refine knowledge via backpropagation, with a nod to stochastic techniques like Monte Carlo methods.
RAG architectures aren't enough in a real-time world. It's time to think about truth-aware agents that can detect and act on semantic change.
Some notes on how attention heads in a transformer model develop through training, how they are used in the model, and how they are combined to produce the final weights.

Some notes on leader election.
Data available from a typical centralized crypto exchange (CEX)
Some tips about keeping Terraform modular and reusable
What is Aeron as a messaging protocol
What is the Disruptor Pattern
What is a Lock-Free Ring Buffer
What is Redis Redlock
Pros and Cons of using Shared Memory to handle large data volumes in low-latency architectures
Explores the debate between Microservice Architectures and Monolithic approaches, and asks whether there's a middle ground that balances complexity and flexibility in software architecture.
Comparing cloud services and attempting to map to a domain model.

Explores what's available from the main cloud service providers for consistently good latency and service response times.
Quick notes on common commands to help troubleshoot a Linux VM.

How Token Swaps Work on CEX, OTC, DEX, and Derivative Markets
Comparing the crypto markets with traditional financial markets.

Using the full range of features that PostgreSQL offers
Comparing Data Types and Casting in SQL
Comparing Window Functions with standard SQL

Overview of Options Markets and Valuation Techniques
Basic 101 Swap Market Terminology
Basic 101 Bond Market Terminology
Basic 101 Equity Market Terminology

A few notes on using GLSL to animate messages
A few notes on how to encode equations in markdown
Hosting ThreeJS and React Three Fiber on NextJS
A quick introduction to hosting P5 Sketches in a NextJS blog.
Notes on how the simulator is constructed.
