Tuned, by Axolotl

Enabling Long Context Training with Sequence Parallelism in Axolotl (v0.8.0)

Apr 2

Training large language models (LLMs) with long contexts has become an important capability as models continue to expand in both size and context length.

Read →
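
The feature the post describes is driven entirely by Axolotl's YAML config. As a minimal sketch of what enabling it might look like, assuming the `sequence_parallel_degree` option that v0.8.0 introduces (the model name and the other values here are illustrative placeholders, not the post's actual configuration):

```yaml
# Minimal sketch: enabling sequence parallelism in an Axolotl config.
# sequence_parallel_degree is the v0.8.0 option; everything else is a
# placeholder chosen for illustration.
base_model: meta-llama/Llama-3.1-8B  # hypothetical example model

sequence_len: 65536          # long-context training target
flash_attention: true        # sequence parallelism builds on ring flash attention
sequence_parallel_degree: 4  # shard each sequence across 4 GPUs

micro_batch_size: 1
gradient_accumulation_steps: 4
```

With a setup like this, each GPU holds roughly `sequence_len / sequence_parallel_degree` tokens of activations per sequence, which is the mechanism that lets contexts fit that would not fit on a single device.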