AI

Overview of current OpenAI services and products
A closer look at, and some experimentation with, sentence-embedding models as a filtering step for feeding real-time updates into a transformer model
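To make that filter step concrete, here is a minimal sketch using the sentence-transformers library: incoming updates are embedded, scored against a topic of interest by cosine similarity, and only the relevant ones survive for the downstream transformer. The model name, threshold, and sample updates are illustrative assumptions, not a fixed recipe.

```python
# Minimal sketch: filter a stream of updates by semantic similarity to a
# topic before passing survivors to a downstream transformer. Assumes the
# sentence-transformers library; data and threshold are illustrative.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # small sentence-embedding model

topic = "large language model releases"
updates = [
    "OpenAI ships a new embeddings endpoint",
    "Local bakery wins award for sourdough",
    "Anthropic publishes an interpretability paper",
]

topic_vec = encoder.encode(topic, convert_to_tensor=True)
update_vecs = encoder.encode(updates, convert_to_tensor=True)

# Keep only updates whose cosine similarity to the topic clears a threshold.
scores = util.cos_sim(topic_vec, update_vecs)[0]
relevant = [u for u, s in zip(updates, scores) if s > 0.3]
print(relevant)  # only the on-topic updates survive the filter step
```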
A deep dive into how transformers separate syntactic structure from semantic meaning using attention heads, layer depth, and learned latent spaces.
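The raw material for that kind of analysis is the per-layer, per-head attention maps. A small sketch of extracting them with the Hugging Face transformers library follows; the choice of bert-base-uncased and the example sentence are assumptions for illustration.

```python
# Sketch: inspect per-head attention maps at different layers, the raw
# material for the syntax-vs-semantics separation described above.
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tok("The keys to the cabinet are on the table", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# out.attentions is a tuple with one tensor per layer, each shaped
# (batch, heads, seq, seq). Early layers tend toward positional/syntactic
# patterns; deeper layers mix in longer-range, more semantic structure.
layer0 = out.attentions[0][0]                  # (heads, seq, seq) at layer 0
print(layer0.shape, layer0[0].argmax(dim=-1))  # where head 0 attends per token
```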
An intuitive look at how transformer models like LLMs train, learn attention patterns, and refine knowledge via backpropagation, with a nod to stochastic techniques such as Monte Carlo methods.
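A toy training step makes that loop concrete: forward pass, loss, backpropagation, parameter update. Sampling a random minibatch is also where the Monte Carlo flavor shows up, since the minibatch gradient is a stochastic estimate of the full-data gradient. All dimensions, data, and the learning rate below are illustrative.

```python
# Sketch: one gradient-descent step on a toy attention layer, showing the
# forward -> backpropagate -> update loop. Everything here is illustrative.
import torch

d = 16
attn = torch.nn.MultiheadAttention(embed_dim=d, num_heads=4, batch_first=True)
opt = torch.optim.SGD(attn.parameters(), lr=0.01)

x = torch.randn(8, 10, d)        # a random minibatch: (batch, seq, dim)
target = torch.randn(8, 10, d)   # illustrative regression target

out, _ = attn(x, x, x)           # self-attention forward pass
loss = torch.nn.functional.mse_loss(out, target)
loss.backward()                  # backpropagation: gradients for all weights
opt.step()                       # nudge attention weights down the gradient
opt.zero_grad()
print(float(loss))
```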
Some notes on how attention heads in a transformer model develop through training, how they are used within the model, and how their outputs are combined into the final result.
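To pin down what "combined" means: each head attends in its own low-dimensional subspace, the head outputs are concatenated back to model width, and a learned output projection mixes them into the layer's result. A from-scratch sketch, with all dimensions chosen purely for illustration:

```python
# Sketch: how per-head attention outputs are combined. Each head attends in
# its own subspace; outputs are concatenated and mixed by a learned output
# projection W_o. Dimensions and random weights are illustrative.
import torch

batch, seq, d_model, n_heads = 2, 5, 16, 4
d_head = d_model // n_heads
x = torch.randn(batch, seq, d_model)

W_qkv = torch.randn(3, n_heads, d_model, d_head)  # per-head Q/K/V projections
W_o = torch.randn(d_model, d_model)               # learned output projection

q, k, v = (torch.einsum("bsd,hde->bhse", x, W) for W in W_qkv)
scores = torch.softmax(q @ k.transpose(-2, -1) / d_head**0.5, dim=-1)
heads = scores @ v                                # (batch, heads, seq, d_head)

# Concatenate heads back to d_model, then apply the combining projection.
concat = heads.transpose(1, 2).reshape(batch, seq, d_model)
out = concat @ W_o
print(out.shape)  # (batch, seq, d_model)
```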