
Posts

Featured

The Importance of Training Data Availability: Why Chasing Trends in Frameworks Could Limit Your LLM Experience

As programming techniques evolve, the quality of a large language model's (LLM) code generation is intimately tied to its training data. This correlation becomes glaringly obvious when comparing the results for well-established frameworks versus those that are relatively new or trending.

The Value of Open-Source Training Data

The availability of training data from open-source code repositories is crucial for LLMs to generate accurate and useful solutions. When an open-source project has been around for a while and is widely available on the web, the LLM can draw from a vast repository of examples to provide answers that align with recognized best practices. For instance, when working with Python or React, most major LLMs produce excellent code. React, for example, has matured over many years and remains popular, resulting in a wealth of training data. Whether I'm asking an LLM to generate component-based solutions or optimize state management, the model almost always returns someth...

Latest Posts

My Journey Using AI: Building Models and Leveraging Pre-Trained Solutions

Command line to pipe output of multiple files to the system copy buffer

Asking the Right Questions on a New Software Project

Weirdly this is the best way to leave a note for my future self

Still learning new Vim tricks after all these years

This is what it means to be Gen-X

My kid is cool and released this EP on Spotify

Linking custom fonts in React Native

Elastic Beanstalk and manage.py

Disabling dnsmasq on Ubuntu solves Chrome DNS resolution lag (for me)