
Why Your AI Keeps Inventing ox_lib Functions (and How to Fix It)

Generic AI keeps inventing ox_lib, oxmysql and QBCore exports that don’t exist. Here’s why it happens and four fixes — rules files, validated RAG, pre-commit checks, and dev-server smoke tests — ranked by effort vs. reward.

SwisserAI Team
9 min read

You ask ChatGPT for a simple ox_lib notification. It gives you ox_lib.notify_player(source, "You got paid"). You paste it in. The server throws attempt to index a nil value (global 'ox_lib'). You ask the model to fix it. It doubles down and adds require('ox_lib') at the top for good measure. Now nothing loads.

If this sounds familiar, you are not alone. AI hallucination in FiveM code is the single biggest tax on developers using generic coding assistants today. This post explains why it happens and walks through four mitigation techniques that actually work in production.

Why generic AI hallucinates ox_lib functions

Large language models do not "know" APIs the way a compiler does. They have seen millions of tokens of public ox_lib code: good code, bad code, deprecated code, home-grown forks, and Lua tutorials that were written for a different library entirely. When you prompt for a notification, the model produces the token sequence that has the highest probability given your context, which is not necessarily the token sequence that corresponds to a real export.

Three failure modes dominate in FiveM specifically:

1. Renamed exports the model remembers in the old shape

ox_lib has renamed exports multiple times during its lifecycle. The model was trained on code snapshots that span all of those renames. When it picks a version to generate against, it is essentially rolling dice. You end up with lib.showTextUI called with the argument shape from six months ago, or a call to an export that was split into two separate ones last release.

2. Plausible-sounding functions that never existed

This is the worst case. The model invents QBCore.Functions.GiveMoneyToPlayer(source, amount) because every word in that function name appears in real QBCore code. It is internally consistent, it reads well, and it is completely fake. Generic models will not self-correct on this: they will insist the function exists and ask you to update QBCore.

3. Wrong library family

FiveM has ox_lib, es_extended, QBCore, Qbox, menuv, qb-menu, ox_inventory, qb-inventory: lots of libraries with overlapping vocabularies. The model will cheerfully mix them, giving you ox_lib notification syntax combined with a qb-menu trigger and a QBCore server event in the same 20-line snippet.

Why this matters for servers

Hallucinated calls are not just a minor annoyance. On a live server they mean:

  • Silent failures where the AI-generated code runs without errors but never actually does the thing you asked for
  • Resources that load in development but crash on the first real interaction
  • Security holes when the AI invents a server event that does not exist and your validation ends up running only on the client side
  • Hours of debugging time chasing a function name that simply is not real

The last one is the killer. When a function exists but has a bug, you can read the source. When a function does not exist, you have to first convince yourself the AI lied before you can move on, and if you are new to the framework, that proof can take longer than writing the thing by hand would have.

Four mitigation techniques

These are ranked roughly by effort-to-reward. Pick the ones that fit your workflow.

1. .cursor/rules markdown files (low effort, medium reward)

If you use Cursor or any AI IDE that reads rules files, drop a markdown file at .cursor/rules/fivem.md that pins down the version of each framework you are running and lists the exports you actually use. Something like:

# FiveM Framework Rules

- This project uses QBCore (not ESX, not Qbox)
- ox_lib version: 3.x (current stable)
- oxmysql version: 2.x (use MySQL.query.await / MySQL.single.await / MySQL.prepare.await)
- Notifications: lib.notify({ title, description, type }) on the client;
  from the server, TriggerClientEvent('ox_lib:notify', source, data)
- Never use "ox_lib.notify_player", "QBCore.Functions.GiveMoney",
  or "MySQL.Async.fetchAll": these do not exist in this stack

# Correct patterns
- Server event registration: RegisterNetEvent + AddEventHandler
- Player lookup: QBCore.Functions.GetPlayer(source)
- Money: Player.Functions.AddMoney("cash", amount, "reason")

The model will still occasionally slip, but you cut the hallucination rate noticeably. Update the file when you bump a framework version.

2. Validated RAG (medium effort, high reward)

Retrieval-augmented generation against an index of the actual ox_lib / oxmysql / QBCore source code beats a rules file by a wide margin. Index the library repos locally (or use a tool that maintains the index for you), and at generation time retrieve the real function signatures relevant to the prompt and inject them into the context.

This is the approach SwisserAI uses under the hood for its grounded generation: every answer goes through a FiveM-aware retriever before the model writes the first token. You can build a home-grown version with LlamaIndex or LangChain plus the ox repos, though maintaining the index as libraries evolve is non-trivial.
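To make the idea concrete, here is a deliberately tiny sketch of the retrieve-then-inject step. The signature strings and the word-overlap scorer are illustrative assumptions, not any library's real index format; a production setup would use embedding search over the actual repos.

```python
import re

# Toy index of real function signatures. In practice these would be
# extracted from the ox_lib / oxmysql / QBCore source, not hand-written.
SIGNATURES = [
    "lib.notify(data: table) -- client-side toast notification",
    "MySQL.query.await(query: string, params?: table) -> result rows",
    "QBCore.Functions.GetPlayer(source: number) -> Player object",
]

def _words(text: str) -> set:
    """Lowercased word set, stripping punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(prompt: str, k: int = 2) -> list:
    """Rank indexed signatures by word overlap with the prompt (toy scorer)."""
    p = _words(prompt)
    return sorted(SIGNATURES, key=lambda s: len(p & _words(s)), reverse=True)[:k]

def grounded_prompt(user_prompt: str) -> str:
    """Prepend the retrieved real signatures so the model generates against them."""
    context = "\n".join(retrieve(user_prompt))
    return f"Use only these verified exports:\n{context}\n\nTask: {user_prompt}"
```

The point is the shape of the pipeline, not the scorer: the model never sees your prompt without the real signatures sitting next to it.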

3. Pre-commit export existence checks (medium effort, high reward)

Even without RAG, you can add a static validation step that catches hallucinated calls before they ship. The simplest version:

  • Parse your Lua files to extract every call to lib.*, QBCore.Functions.*, MySQL.*, etc.
  • Compare against a list of known-real functions (scraped from the library source)
  • Fail the commit if anything is unrecognised

You can wire this into .git/hooks/pre-commit or your CI. It does not catch wrong argument shapes, but as long as the known-function list stays current it catches the worst class of hallucination (the fully-invented function name) with zero false negatives.
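The three steps above can be sketched in a few dozen lines. This is a minimal version assuming a hand-maintained allow-list; the names in KNOWN and the file layout are placeholders for whatever your scraper actually produces.

```python
import re
from pathlib import Path

# Allow-list of known-real exports. In practice you would scrape this from
# the library source; this short list is illustrative only.
KNOWN = {
    "lib.notify", "lib.callback.register", "lib.callback.await",
    "MySQL.query.await", "MySQL.single.await", "MySQL.prepare.await",
    "QBCore.Functions.GetPlayer", "QBCore.Functions.CreateCallback",
}

# Match dotted calls rooted at the namespaces we care about, e.g. "lib.notify("
CALL = re.compile(r"\b((?:lib|MySQL|QBCore)(?:\.\w+)+)\s*\(")

def unknown_calls(lua_source: str) -> set:
    """Return every dotted call in the source that is not on the allow-list."""
    return {m.group(1) for m in CALL.finditer(lua_source)} - KNOWN

def check_tree(root: str) -> set:
    """Collect unrecognised calls across every .lua file under root."""
    bad = set()
    for path in Path(root).rglob("*.lua"):
        bad |= unknown_calls(path.read_text(encoding="utf-8"))
    return bad

# In .git/hooks/pre-commit (sketch):
#   bad = check_tree("resources")
#   if bad:
#       print("Unrecognised calls:", sorted(bad))
#       raise SystemExit(1)   # non-zero exit blocks the commit
```

A regex is not a Lua parser, so this will miss calls built dynamically at runtime, but for the straight-line code AI assistants emit it is more than enough.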

4. Run the output against a real resource (low effort, high reward)

The final and most underrated technique: have a disposable FiveM server running locally, drop the AI-generated resource in, and start it. If the code references a fake function, the server will scream within seconds of the resource loading. This catches more than static analysis because it exercises runtime paths.

Set up a local txAdmin or vanilla FiveM server with the frameworks you actually use. Keep it empty except for the AI output under test. Spin it up, check the console, spin it down. It adds ten seconds to the feedback loop but it saves hours of debugging later.
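The loop can be scripted so it costs nothing to run. A sketch of the log-check half is below; the error patterns are the ones FXServer typically prints when Lua hits a fake function, and the paths, sleep duration, and resource name in the wiring comment are placeholders for your local setup.

```shell
# Succeed only if no script errors appear in the captured console output.
# Patterns are assumptions based on typical FXServer error output.
check_log() {
  ! grep -qE "attempt to (index|call) a nil value|SCRIPT ERROR" "$1"
}

# Example wiring (run from your disposable server directory):
#   ./run.sh +exec server.cfg > /tmp/smoke.log 2>&1 &
#   SERVER_PID=$!
#   sleep 15                       # give the resource under test time to load
#   kill "$SERVER_PID"
#   check_log /tmp/smoke.log || { echo "smoke test FAILED"; exit 1; }
```

Because the server actually executes the resource's load path, this catches bad argument shapes and wrong-library mixes that a name-only linter lets through.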

Combining techniques

The real wins come from stacking two or three of these. A rules file plus a pre-commit check catches the easy hallucinations at zero per-request cost. Add validated RAG on top and the model starts getting things right the first time instead of being corrected after the fact.

If you want the stacked version without the setup work, that is essentially what a FiveM-specific AI like SwisserAI ships as a product: grounded generation plus an export linter plus framework-aware prompts, all running on FiveM-tuned models rather than a general-purpose assistant. The same techniques, maintained for you.

Takeaway

Generic AI assistants are powerful, but "powerful" and "accurate for FiveM" are not the same thing. The default behaviour when you hand a generic model an ox_lib question is confident hallucination. Pinning down your stack in a rules file, validating outputs before commit, and ideally running the code against a real server catches most of the damage.

The rest (the edge cases, the subtle argument-shape bugs, the "this function is real but deprecated" misses) is where a FiveM-tuned model earns its keep. Use whichever layering of these techniques makes sense for your team.

