Are the local AI models trainable or is it basically static? Have not really looked much into it but I think it would be kind of cool to experiment with training models from scratch. I cannot afford hardware at this point though to run any models. I don't actually have all that much compute power in my server rack overall. Most of it is 10+ year old hardware.
There are basically 4 kinds of local AI model work:
1. Prompting (Q&A via temporary memory)
2. Fine-tuning (ex. using LoRAs to change weights & modify the internal neural connections)
3. Retrieval-Augmented Generation aka RAG (reads from your database using stuff like LangChain)
4. Full model training (minimum of a million bucks & thousands of GPUs!)
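To make #3 concrete, here's a toy sketch of the RAG idea in plain Python — no LangChain, just crude word-overlap scoring, and all the "documents" are made up for illustration. Real systems use embedding vectors instead of word counts, but the shape is the same: retrieve relevant text, stuff it into the prompt.

```python
from collections import Counter

# Toy "database" of private notes the model can't have memorized
docs = [
    "The server rack runs a 2012 Xeon with 32GB of ECC RAM.",
    "The NAS uses ZFS with weekly scrub jobs.",
    "The backup script emails a report every Sunday night.",
]

def score(query, doc):
    """Crude relevance score: count overlapping words."""
    q = Counter(query.lower().split())
    d = Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query, k=1):
    """Return the k most relevant documents for the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query):
    """RAG stuffs the retrieved text into the model's prompt."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What RAM is in the server rack?"))
```

The payoff is that the model answers from *your* data without any retraining — which is exactly why ChatGPT-style services bolt retrieval on instead of retraining 24/7.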
ChatGPT combines all 4 as a service:
1. You ask it questions
2. It is fine-tuned (say to be sycophantic, i.e. have a helpful & encouraging tone)
3. It retrieves data from the Internet so that it doesn't need to be retrained 24/7
4. It was trained on everything it could find
You can easily spend $100 million doing Full Model Training. That's where your private AI database goes full-on Kirby & vacuums up EVERYTHING in the world, which requires MASSIVE amounts of power! This is where late-stage capitalism comes into play, & it's really the part that people have a problem with:
1. Full-on theft (art, writing, movies, etc.)
2. Bad neighbors (water usage, electricity price hikes, RAM/GPU price spikes, etc.)
3. Lack of accountability (ex. mental health issues)
And of course, then China comes along & copies all of America's work, releasing KIRFs at a fraction of the price or even for free. It famously started when DeepSeek cloned ChatGPT & more recently when they "distilled" Anthropic's models. Which is kind of like the pot calling the kettle black, because the American labs themselves stole the original data & literally got sued for a billion dollars lol!
Hence the AI bubble: everyone is competing in the new market & as a new source of weapons (data, robotic development acceleration, etc.), but nobody knows how anyone is going to make any real money with it, so it's a HUGE bet on future value!
Now that AI has become useful to consumers, the next steps are:
1. Exploring niches
2. Making it efficient
3. Agents
Exploring niches:
As far as development goes, Hugging Face is the GitHub of new AI tech, with over 2 million models available. This is where much of the R&D and new features are sourced from:
We’re on a journey to advance and democratize artificial intelligence through open source and open science.
huggingface.co
So now we're seeing fine-tuned services such as Perplexity, which is an AI-powered news & search engine:
Sesame, for realistic advanced voice interactions as a consumer-facing interface to ANY system:
app.sesame.com
NotebookLM, for your own personal knowledge database with web research added:
Meet NotebookLM, the AI research tool and thinking partner that can analyze your sources, turn complexity into clarity and transform your content.
notebooklm.google
Suno Studio, to turn music ideas into songs quickly:
Studio is only available with a Premier plan Suno Studio is a groundbreaking web-based Generative Audio Workstation (or GAW ) that merges traditio...
help.suno.com
Freepik, which is a service that gives you access to the best photo & video models: (essentially like Photoshop on steroids!)
Making it efficient:
Now that AI has all of the data in the world & is getting refined in various niches, the next step is shrinking the algorithms & datasets. The big news this month is that Qwen has released an at-home model that is competitive with Anthropic's cloud offering:
Which you can realistically run on a modern Mac Mini!!
"
You can now run the Qwen3.5 Small models locally on just 6GB Mac / RAM device"
Or even a Raspberry Pi now!
"
Orange pi 5 max + ubuntu it free if you use Llama-3.2-3B , eepSeek-R1-Distill-Qwen-7B or Phi-3.5-mini"
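The reason those tiny machines can cope is quantization: shrinking each weight from 16 bits down to 4. A rough back-of-envelope (my own estimate, not official numbers from any vendor) is parameters × bytes-per-weight, plus some runtime overhead:

```python
def model_memory_gb(params_billions, bits_per_weight, overhead=1.2):
    """Rough memory estimate: weights only, plus ~20% for
    KV-cache & runtime overhead. A ballpark, not a guarantee."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total * overhead / 1e9

# A 7B model in full 16-bit precision vs. 4-bit quantized:
print(f"7B @ 16-bit: {model_memory_gb(7, 16):.1f} GB")  # ~16.8 GB
print(f"7B @ 4-bit:  {model_memory_gb(7, 4):.1f} GB")   # ~4.2 GB
print(f"3B @ 4-bit:  {model_memory_gb(3, 4):.1f} GB")   # ~1.8 GB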
Agents:
The next big things are Agents, which are virtual thinking "robot" assistants! They have 3 parts:
1. The AI brain (local or online service)
2. The tools (plugins to do various tasks)
3. Memory (like having a personal assistant!)
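Those 3 parts can be sketched in a few lines of Python. This is a toy with a fake "brain" (a dumb keyword router); a real agent swaps `brain()` for an LLM call & lets the model pick the tool, but the loop is the same:

```python
import datetime

# 2. The tools: plain functions the agent is allowed to call
def clock(_):
    return datetime.datetime.now().isoformat()

def calculator(expr):
    # eval() is fine for a toy; a real agent needs a safe parser
    return str(eval(expr, {"__builtins__": {}}, {}))

TOOLS = {"clock": clock, "calculator": calculator}

# 3. Memory: just a running list of everything that happened
memory = []

# 1. The "brain": a dumb keyword router standing in for an LLM
def brain(task):
    if any(ch.isdigit() for ch in task):
        return "calculator", task
    return "clock", task

def agent(task):
    tool_name, arg = brain(task)              # brain picks a tool
    result = TOOLS[tool_name](arg)            # tool does the work
    memory.append((task, tool_name, result))  # memory keeps the trail
    return result

print(agent("2 + 2 * 10"))  # → 22
```

The memory list is what makes it feel like a personal assistant: the brain can read its own past actions on the next turn instead of starting from scratch.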
The big leap recently was OpenClaw:
OpenClaw — The AI that actually does things. Your personal assistant on any platform.
openclaw.ai
Which is a free, private DIY agent that can:
1. Talk to local or cloud AI models
2. Access all of your hardware, software, files, and accounts
3. Basically do anything YOU want it to do!
So now you can:
1. Run Qwen locally
2. Run OpenClaw locally
3. Control what it does & what it has access to, for free!
For example, you can add a local voice AI that is equivalent to ElevenLabs. You can build a PBX VOIP telephony system for business, a custom smartphone voice, etc.! Try it out here:
This app lets you turn written text into spoken audio. You can create a brand‑new voice by describing its style, clone a voice from an uploaded audio sample (with optional transcript), or pick a pr...
huggingface.co
If you want to play with AI locally, grab the LM Studio app store:
Run local AI models like gpt-oss, Llama, Gemma, Qwen, and DeepSeek privately on your computer.
lmstudio.ai
For local image generation, download Comfy for free:
The most powerful and modular visual AI application and engine
www.comfy.org
Anyway... local AI with REAL performance is within reach for consumers as of this month! Full model training is still best left to companies with huge pockets & render farms; the real fun is in customizing what's readily available to DO cool stuff for you! Smarthome, home theater, custom computers, personal robots, business automation, high-speed software development, website design, server customization, etc.!
If you like working with images, dive into ComfyUI at home for FREE! Note that image generation does tend to like beefy gaming cards as opposed to Pi hardware, haha! But this is one of the areas of local AI that you really CAN train & tweak & customize to get stellar personalized results!
Comfy UI tutorials
The latest big news is that LTX 2.3 is now available for the home desktop for FREE!
Open-source local video generation
While it really needs at least 32GB VRAM to be fast, it can run on as little as a 6GB GPU (~$200) to start playing with, which means even a budget gaming laptop can create AI videos & cartoons! Especially if you're a student with a great idea & no million-dollar film budget, this is INCREDIBLE news!!
Go out there & create some CRAZY stuff on your home computer! Make the next great Anandtech jungle action flick about "moving out to wilderness"!! 😍 😍 😍
