Recent posts

#11
Introductions / Hi, Hamuki here :)
Last post by Hamuki - Mar 01, 2026, 02:53 PM
Hey everyone!

I'm the creator of the forum.

A short intro to who I am, and what I am mainly interested in :D

I'm in my late 20s, with a wife and a small kid.
I have been a nerd all of my life, first with hardware and lots of gaming.

Later on, I got an entry-level IT education and have spent many years playing around with crypto, creating WordPress-based sites, and using SEO to gain traffic, alongside some affiliate marketing to earn some beer money.

I have always found AI to be an interesting subject, but it wasn't until OpenAI released ChatGPT that I was hooked. I used it for a long time, but I was also getting interested in running local models.

At first, I started out with Stable Diffusion, using the Automatic1111 UI for image generation.
But a few months ago, I finally made the switch to ComfyUI.

I also started learning a little about text models and found aihorde.net, which really got me going in terms of playing around with them.

The latest thing I tried was downloading OpenClaw and hooking it up to Codex.
I haven't spent much time with it, as I've simply been too busy and had a hard time finding useful info about setting it up.

Which is why I also made this forum :)

/Hamuki
#12
Announcements / Welcome to TMF!
Last post by Hamuki - Feb 26, 2026, 12:00 PM
Welcome to TMF!
What is TMF?
The Model Forum is an attempt to create a strong community of like-minded AI enthusiasts.
This is a place for discussion about AI models, use cases, hardware, and new developments in the field.

Why make TMF?
Right now, there are lots of subreddits, YouTube channels, Discord servers and other places to learn more about AI.
But most of these places are overrun by users who either try to sell you their course or software, or simply write things/scenarios that are not true.

How many times have you read a post on Reddit where the title looks something like:
"I created a whole business with an AI agent team, you can too!".

But in the end, all you read is a big story with big claims of great achievements..
There is no guide, no actual useful information on how you can do this yourself, and 90% of the time these posts are just AI slop thrown together by ChatGPT, Claude or some other AI.

This is why TMF will be different!
We will not allow these spammy, attention-seeking threads on this forum.
Sure, we will have a section for self-promotion, where those who have created something they wish to sell are allowed to shill their project.

But threads that are nothing but "bait" will be deleted.
If users repeatedly create these threads after being warned, they will be banned from TMF.

What is the focus of TMF?
My personal reason for making TMF is to combat the behavior mentioned above, so the rest of us can talk about local AI models, the hardware we run them on, the use cases, and everything in between.. Without the noise.

I hope to build this forum in a way that allows people with little technical knowledge to quickly know where to start tinkering with AI,
so they can learn the basics fast, start playing around, and share their knowledge with others.

Some basic rules

  • No adult content sharing is allowed.
    - Do not share images or discuss adult topics on the forum.
    - You are allowed to discuss models that are capable of creating adult content, but do not talk about how to make that content here.
  • No AI agent users.
    - OpenClaw and other agent frameworks are cool, but on this forum, only human interaction is allowed.
    - AI agents have their own platforms, and this forum is not one of them.
  • Be nice.
    - It's really simple: don't be an asshole, and treat others with respect. We are here to help each other.
  • Use the correct board for the thread you wish to post.
    - Everyone makes mistakes, but do not make a thread about ChatGPT in the Local Text board.
  • Do you think a feature is missing? Let us know on the feedback board.


#13
Show & Tell / Template - Show & Tell
Last post by Hamuki - Feb 24, 2026, 02:20 PM
Hi,

I made a template for creating a "Show & Tell" thread with ease.
You are free to use the template if you want to; it is not a requirement.

Remember you can switch the editor to view the formatted text instead of the raw BBCode.


[center][size=16pt][b]Show & Tell — My AI Hardware Setup[/b][/size][/center]

[size=13pt][b]1) Quick Overview[/b][/size]
[b]Primary use:[/b] (chat / coding / image gen / video / fine-tuning / agents / other)
[b]Main goal:[/b] (speed / quality / budget / low power / portability / quiet / etc.)
[b]Total price:[/b] (currency + rough date bought)
[b]Build type:[/b] (desktop / laptop / server / multi-node / cloud + local hybrid)


[size=13pt][b]2) Hardware Specs[/b][/size]
[b]CPU:[/b]
[b]GPU(s):[/b] (model + VRAM per GPU)
[b]RAM:[/b] (capacity + speed if known)
[b]Storage:[/b] (NVMe/SATA + size + where models live)
[b]Motherboard:[/b] (optional)
[b]PSU:[/b] (optional)
[b]Cooling:[/b] (air/AIO/custom + notes)
[b]Case / Form factor:[/b] (optional)
[b]Network:[/b] (ethernet/wifi + speed, optional)


[size=13pt][b]3) Software & Stack[/b][/size]
[b]OS:[/b] (Windows / Linux distro / macOS)
[b]GPU drivers:[/b] (version if you know it)
[b]Runtime / tools:[/b] (Ollama / llama.cpp / vLLM / koboldcpp / text-gen-webui / ComfyUI / A1111 / LM Studio / etc.)
[b]Container setup:[/b] (Docker / none / compose link, optional)
[b]Remote access:[/b] (SSH / RDP / Tailscale / reverse proxy, optional)


[size=13pt][b]4) Models I Use[/b][/size]
Text models: (name + quant + context size)
Coding models:
Image models: (SDXL / Flux / etc.)
Video / audio models:
Embeddings / rerankers:

[size=13pt][b]5) Performance Benchmarks[/b][/size]

[b]Text generation (tokens/sec):[/b]
Model + settings: (e.g., 7B Q4_K_M, ctx 8k, batch size if known)
Prompt size: (approx tokens)
Generation speed: (___ tok/s)
Notes: (GPU offload? CPU only? streaming?)
[b]Image generation speed:[/b]
Model:
Resolution:
Steps / sampler:
Time per image:
[b]VRAM/RAM usage notes:[/b]
(list your typical usage and limits)


[size=13pt][b]6) Workflow & Use Cases[/b][/size]
[b]Typical tasks:[/b] (coding, RAG, agents, roleplay, summaries, local automation, etc.)
[b]Tools around the model:[/b] (IDE plugin, API gateway, OpenClaw, n8n, LangChain, etc.)
[b]Context/RAG setup:[/b] (none / embeddings db / file chat / vector db)
[b]Most common bottleneck:[/b] (VRAM / RAM / CPU / disk / heat / noise)


[size=13pt][b]7) Power, Thermals, and Noise[/b][/size]
[b]Idle power draw:[/b]
[b]Load power draw:[/b]
[b]Temps under load:[/b]
[b]Noise level:[/b] (silent / noticeable / jet engine)
[b]Any undervolt/OC settings?[/b]


[size=13pt][b]8) What I'd Upgrade Next[/b][/size]
[b]Next upgrade plan:[/b]
[b]Budget range:[/b]
[b]What I'm trying to improve:[/b] (tok/s, VRAM, efficiency, stability, etc.)


[size=13pt][b]9) Photos / Screenshots[/b][/size]
Build photo(s):
Benchmark screenshot(s):
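
A quick note on the tokens/sec field in section 5: if your runtime doesn't report it directly, you can compute it yourself as generated tokens divided by wall-clock generation time. A minimal Python sketch (the token count and timing below are made-up example numbers, not a benchmark):

```python
# Rough tokens/sec calculation for the "Performance Benchmarks" section.
# n_tokens and elapsed_s are numbers you read off your own runs.

def tokens_per_second(n_tokens: int, elapsed_s: float) -> float:
    """Generated tokens divided by wall-clock generation time."""
    if elapsed_s <= 0:
        raise ValueError("elapsed time must be positive")
    return n_tokens / elapsed_s

# Example: 128 tokens generated in 4.0 seconds -> 32.0 tok/s
print(tokens_per_second(128, 4.0))
```

Most runtimes (llama.cpp, Ollama, etc.) print timing info after a run, so you usually only need this when eyeballing a stopwatch.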
#14
Feedback & Suggestions / Prefix suggestions
Last post by Hamuki - Feb 23, 2026, 01:54 AM
Hi,

As you might have noticed, we offer the use of prefixes in titles to make it easier to find interesting threads to read.

But we might not have everything you are looking for, so if you think one or more prefixes are missing, please let me know below.

Please give me the name, the boards it should be available in, and a suggestion for the color if you have one (a color code if possible), and I can add it.

/Hamuki
#15
Text Generation / [Coding] Favorite Model for Coding
Last post by Hamuki - Feb 22, 2026, 01:37 PM
Which model(s) do you prefer to use for coding?