This directory contains recommended LLM API performance settings for popular models. They can be used out-of-the-box with trtllm-serve via the --config CLI flag, or adjusted to fit your specific use case.
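
As a minimal sketch, a server could be launched with one of these configs roughly as follows. The model name and config filename below are placeholders; substitute the model you are deploying and the matching YAML file from this directory.

```bash
# Launch an OpenAI-compatible server with one of the recommended configs.
# "meta-llama/Llama-3.1-8B-Instruct" and "./llama-3.1-8b-instruct.yaml" are
# example placeholders, not guaranteed to exist in this directory.
trtllm-serve meta-llama/Llama-3.1-8B-Instruct \
  --config ./llama-3.1-8b-instruct.yaml \
  --host 0.0.0.0 \
  --port 8000
```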
For model-specific deployment guides, please refer to the official documentation.