instagib 7 days ago

With a well-structured setup I've gotten local models to cite a source for each line they produce, which makes every response easy to verify. As prompts get more complex, though, I have to move up to larger models, and on the same hardware a bigger model takes more memory, leaving less room for the context window.
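For what it's worth, the gist of the setup is just a strict system prompt against a local OpenAI-compatible server; the endpoint, model name, and the [source: ...] convention below are placeholders, not my exact config:

    # Minimal sketch: ask a local model to cite a source for every line.
    # Endpoint, model name, and the [source: ...] format are assumptions.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

    SYSTEM = (
        "Answer one claim per line. End every line with [source: <doc or URL>]. "
        "If you cannot name a source for a line, write [source: none] instead."
    )

    resp = client.chat.completions.create(
        model="local-model",          # whatever name the local server exposes
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": "Summarize the trade-offs of quantizing a 70B model."},
        ],
        temperature=0.2,              # keep the output close to the sources
    )

    for line in resp.choices[0].message.content.splitlines():
        print(line)                   # each line should carry its own [source: ...] tag

Lines that come back tagged [source: none] are the ones I treat as unverified.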

Draft models (for speculative decoding) also help when they're set up properly and paired with the right quantizations, though some models have known quantization issues; there's a Reddit thread that goes into them. A rough example of the wiring is below.
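This is roughly what hooking up a draft model looks like with llama.cpp; the flag names are assumptions based on its speculative-decoding support and the paths are placeholders, so check your build's --help:

    # Rough sketch: launch llama-server with a small draft model for
    # speculative decoding. Flags may differ by llama.cpp build; model
    # paths are placeholders.
    import subprocess

    subprocess.run([
        "llama-server",
        "-m",  "models/big-model-q4_k_m.gguf",     # main (quantized) model
        "-md", "models/small-draft-q4_k_m.gguf",   # small draft model from the same family
        "--draft-max", "16",                       # tokens the draft proposes per step
        "-c", "8192",                              # context size, bounded by leftover VRAM
    ], check=True)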

One thing I haven't tried yet is a multi-model playground where models pass messages back and forth until they reach a consensus. There are several videos demonstrating the concept, and it could be an alternative approach; a toy sketch of the loop is below.
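If I were to hack one together, my mental model is just two local chat endpoints taking turns until one concedes; the ports, model name, and the AGREED stopping convention here are all invented for illustration:

    # Toy sketch of a "debate until consensus" loop between two local models.
    # Ports, model name, and the AGREED stopping convention are made up.
    from openai import OpenAI

    a = OpenAI(base_url="http://localhost:8080/v1", api_key="x")  # model A's server
    b = OpenAI(base_url="http://localhost:8081/v1", api_key="x")  # model B's server
    QUESTION = "Which quantization fits a 70B model on a single 24 GB GPU?"
    SYSTEM = ("You are debating another model. Critique or refine its last answer. "
              "Start your reply with AGREED once you fully accept it.")

    def to_messages(transcript, me):
        # Rebuild the chat from one model's point of view: its own past turns
        # become "assistant" messages, the other model's turns become "user".
        messages = [{"role": "system", "content": SYSTEM},
                    {"role": "user", "content": QUESTION}]
        for speaker, text in transcript:
            messages.append({"role": "assistant" if speaker == me else "user",
                             "content": text})
        return messages

    transcript = []
    for turn in range(6):                        # hard cap so the loop always terminates
        me, client = ("A", a) if turn % 2 == 0 else ("B", b)
        out = client.chat.completions.create(model="local-model",
                                             messages=to_messages(transcript, me))
        text = out.choices[0].message.content
        transcript.append((me, text))
        if text.startswith("AGREED"):            # naive consensus check
            break

    print(transcript[-1][1])                     # consensus answer, or last turn at the cap

The videos I've seen add a judge model on top instead of a simple AGREED check, which is probably the more robust stopping rule.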