No system is perfect. AGISystem2 trades generative freedom for determinism and clarity, and that choice brings boundaries you should understand before you rely on it. Capacity is finite: int8 vectors and fixed partitions cannot hold every nuance. For most domains, 2048 dimensions suffice, but extremely rich domains may require 4096 and disciplined masking to keep distances meaningful. If you dump unpruned features into the empirical tail, the geometry will blur and retrieval will lose contrast.
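
The effect of disciplined masking on distance contrast can be sketched in a few lines. This is an illustrative toy, not AGISystem2's actual API: the function name `masked_l1` and the list-based vectors are assumptions standing in for the engine's int8 partitioned vectors.

```python
# Hypothetical sketch: masked L1 distance over int8-style vectors.
# masked_l1 and the mask layout are illustrative, not the engine's real API.

def masked_l1(a, b, mask):
    """L1 distance restricted to the dimensions a profile marks as meaningful."""
    return sum(abs(x - y) for x, y, m in zip(a, b, mask) if m)

# Two points that agree on the curated dimensions...
a = [10, 20, 3, 120, -90]
b = [10, 21, 3, -5, 60]
mask = [1, 1, 1, 0, 0]   # last two dims are unpruned "tail" noise

print(masked_l1(a, b, mask))     # 1   -> close under the curated mask
print(masked_l1(a, b, [1] * 5))  # 276 -> tail noise swamps the signal
```

The same pair of points looks near or far depending on whether the noisy tail is masked out, which is exactly the loss of contrast the paragraph warns about.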

Language normalization is another bottleneck. The engine only accepts a constrained grammar; if the TranslatorBridge mangles a sentence, the system will encode the wrong point with total confidence. Pin deterministic prompts and models, log their versions, and be willing to reject inputs you cannot normalize cleanly rather than guessing. This is a human-in-the-loop responsibility.
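
The "reject rather than guess" discipline can be sketched as a gate in front of the encoder. The grammar here is a deliberately tiny stand-in (a single regular expression), not the system's real constrained grammar, and `normalize_or_reject` is a hypothetical name.

```python
# Hypothetical sketch of a normalization gate that refuses to guess.
# The one-pattern "grammar" is a stand-in for the real constrained grammar.
import re

ALLOWED = re.compile(r"[A-Z][a-z]+ (is|has|implies) [a-z]+\.")

def normalize_or_reject(sentence: str) -> str:
    s = " ".join(sentence.split())  # deterministic whitespace cleanup
    if ALLOWED.fullmatch(s):
        return s
    raise ValueError(f"cannot normalize cleanly, refusing to guess: {s!r}")

print(normalize_or_reject("Water  is wet."))  # -> "Water is wet."
```

Raising on anything outside the grammar is the point: a rejected input is a human-review item, while a silently "repaired" input becomes a confidently wrong point in the space.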

Retrieval via LSH is approximate by design. Mis-tuned hashes, bands, or bucket widths can hide true neighbors or drown you in false ones. We always refine with exact masked L1, but you still need to pick parameters per profile and monitor hit rates in production. Determinism helps you reproduce issues, but it does not replace tuning.
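
The candidate-then-refine pattern looks roughly like this. It is a toy random-hyperplane LSH in plain Python; the band/row scheme, function names, and parameter values are illustrative assumptions, not the engine's actual implementation.

```python
# Toy sketch: approximate LSH candidate lookup, then exact L1 refinement.
import random

random.seed(0)  # pinned seed: deterministic hashes, reproducible issues

DIM, BANDS, ROWS = 8, 4, 2
# Random signed projections per band (illustrative hashing scheme).
planes = [[[random.choice((-1, 1)) for _ in range(DIM)]
           for _ in range(ROWS)] for _ in range(BANDS)]

def signature(v, band):
    return tuple(int(sum(p * x for p, x in zip(plane, v)) >= 0)
                 for plane in planes[band])

def lsh_candidates(query, corpus):
    """Approximate: any point sharing at least one band signature."""
    qsigs = {b: signature(query, b) for b in range(BANDS)}
    return [v for v in corpus
            if any(signature(v, b) == qsigs[b] for b in range(BANDS))]

def refine(query, candidates):
    """Exact L1 re-ranking of the approximate candidate set."""
    return sorted(candidates,
                  key=lambda v: sum(abs(a - b) for a, b in zip(query, v)))

corpus = [[1, 2, 3, 4, 5, 6, 7, 8],
          [8, 7, 6, 5, 4, 3, 2, 1],
          [1, 2, 3, 4, 5, 6, 7, 9]]
query = [1, 2, 3, 4, 5, 6, 7, 8]
cands = lsh_candidates(query, corpus)
print(refine(query, cands)[0])  # an identical vector shares every band, so it ranks first
```

Note the two failure modes the paragraph describes live entirely in `lsh_candidates`: too few bands hides true neighbors, too many floods `refine` with noise it must pay exact-distance cost to discard.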

Clustering is heuristic. Thresholds that are too tight fragment concepts; thresholds that are too loose smear distinct senses together. Start simple, log every split and merge, and review drift over time. Use ValidationEngine to surface counterexamples when clusters collide; adjust thresholds with evidence, not guesswork.
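
A minimal version of "log every split and merge" can be sketched with single-link threshold clustering over scalars. The function name, the scalar points, and the log format are all illustrative assumptions, not ValidationEngine's or the engine's real clustering API.

```python
# Hypothetical sketch: single-link threshold clustering with an audit log,
# so every merge decision can be reviewed when drift appears.

def cluster(points, threshold, log):
    clusters = [[p] for p in points]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if d <= threshold:
                    log.append(("merge", clusters[i][:], clusters[j][:], d))
                    clusters[i] += clusters.pop(j)
                    merged = True
                    break
            if merged:
                break
    return clusters

log = []
tight = cluster([1, 2, 10, 11], threshold=1, log=log)
print(sorted(map(sorted, tight)))  # [[1, 2], [10, 11]]
print(len(log))                    # 2 merge decisions, each auditable
```

Rerunning with a looser threshold (say 10) collapses everything into one cluster, and the log shows exactly which merge smeared the two senses together; that evidence, not guesswork, is what should drive the threshold adjustment.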

Execution is intentionally CPU-only. Large corpora or heavy queries demand batching, thoughtful indexing, and perhaps profile changes. There is no GPU or WASM acceleration in the current contract, so plan capacity accordingly. If you need more speed, start with profile tuning and LSH parameter sweeps before considering architectural changes.
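
A parameter sweep is only useful if you can score each setting, so the one reusable piece is a recall harness that compares an approximate retriever against exact search. The harness below is a hypothetical sketch (`recall_at_k` and the toy retrievers are not engine APIs); in practice you would plug each candidate LSH configuration in as `retrieve`.

```python
# Illustrative sweep harness: score an approximate retriever against
# exact search before reaching for architectural changes.

def recall_at_k(retrieve, exact, queries, k):
    hits = total = 0
    for q in queries:
        truth = set(exact(q)[:k])
        hits += len(truth & set(retrieve(q)[:k]))
        total += len(truth)
    return hits / total if total else 1.0

corpus = [1, 5, 9, 13]
exact = lambda q: sorted(corpus, key=lambda v: abs(v - q))
sloppy = lambda q: list(corpus)  # a deliberately untuned retriever

print(recall_at_k(exact, exact, [4, 10], k=2))   # 1.0
print(recall_at_k(sloppy, exact, [4, 10], k=2))  # 0.5
```

A sweep then just loops over candidate settings, building a retriever for each and keeping the cheapest one whose recall clears your threshold.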

Theory management is a social problem as much as a technical one. Layers that are misordered or left ambiguous will trigger conflict stops. This is a feature—better to halt than to hallucinate—but it means you must curate layers and precedence explicitly, especially in overlapping domains like law or compliance. Provenance will tell you which layers collided; governance must decide how to resolve them.
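
The halt-on-ambiguity behavior can be made concrete with a small sketch. The `ConflictStop` name, the claim/precedence shapes, and the layer names are illustrative assumptions; the real engine's conflict handling and provenance reporting are richer.

```python
# Hypothetical sketch: resolve layered claims by explicit precedence,
# and stop (rather than guess) when precedence is not declared.

class ConflictStop(Exception):
    pass

def resolve(claims, precedence):
    """claims: {layer: verdict}; precedence: ordered layer names, highest first."""
    unranked = sorted(l for l in claims if l not in precedence)
    if unranked:
        raise ConflictStop(f"layers without explicit precedence: {unranked}")
    top = min(claims, key=precedence.index)
    return claims[top]

print(resolve({"statute": "forbidden", "guideline": "allowed"},
              ["statute", "guideline"]))  # -> "forbidden"
```

The deliberate gap is that the code never invents a rank: a claim from a layer governance has not ordered stops the query, and the exception message is the provenance hook telling you which layers collided.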

To avoid unbounded work, the engine also enforces hard limits on certain operations. The Reasoner has an upper bound on the number of steps it may take when expanding transitive relationships or exploring search spaces; if that bound is exceeded, the answer is reported as unknown or timed out rather than letting the process hang. TemporalMemory likewise refuses to rewind more than a configured number of steps in a single call. These limits can be adjusted through configuration, but they should always remain finite. They act as guardrails: they cannot make a bad theory good, but they can prevent a bad theory from freezing the system.
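
The step-bound pattern for transitive expansion can be sketched as a budgeted breadth-first search. The names (`reaches`, `max_steps`, the `"unknown"` sentinel) are illustrative, not the real Reasoner API; the point is the three-valued result.

```python
# Hypothetical sketch of a step-bounded transitive query: answer True,
# False, or "unknown" when the step budget runs out, never hang.

def reaches(edges, src, dst, max_steps=1000):
    frontier, seen, steps = [src], {src}, 0
    while frontier:
        nxt = []
        for node in frontier:
            steps += 1
            if steps > max_steps:
                return "unknown"  # budget exhausted: halt, don't hang
            if node == dst:
                return True
            for m in edges.get(node, ()):
                if m not in seen:
                    seen.add(m)
                    nxt.append(m)
        frontier = nxt
    return False

chain = {i: [i + 1] for i in range(100)}
print(reaches(chain, 0, 100))                # True
print(reaches(chain, 0, 100, max_steps=10))  # "unknown"
```

Note that `"unknown"` is an honest answer, distinct from `False`: the relationship may well hold, but proving it would exceed the budget, and the guardrail prefers an explicit non-answer to a frozen process.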

Finally, remember that AGISystem2 is not a language stylist. It will not invent smooth prose; it will tell you whether a geometric condition holds. It shines when you value being right, reproducible, and explainable over being eloquent. Knowing these limits up front lets you deploy it where it fits and avoid surprise when it refuses to speculate.