LLM Fine-Tuning and Inference for Historical Research
Secure model training and deployment for DH applications
A researcher from the Austrian Academy of Sciences (ÖAW) reached out with requirements for local LLM training and inference capabilities. After completing an AI study program and gaining extensive hands-on experience with prompting, RAG systems, and local LLMs, they are now looking to scale beyond what is possible with consumer hardware...