Why bury the pricing information in the documentation? The problem with these platforms is that it's unclear how much bandwidth/money your use case will require to actually train and run a successful LLM.
The world needs products like this that are local-first and open source. Let me train an open-source LLM on my M2 MacBook with a desktop app, and then I'll consider giving you my money. App developers integrating LLMs need to be able to experiment and see the potential before storing everything in the cloud.
We're working on a dedicated pricing page with all the relevant information; pricing in the docs is just temporary. That said, new users get free credits to try out the platform without spending anything.
We've built the platform primarily for companies that serve LLMs in production, so even if we allowed you to fine-tune on-device, sooner or later you'd find yourself wanting to deploy the model.
We want to streamline this whole process, end-to-end.
That said, I do agree that we shouldn't store everything in the cloud. Here's what we're doing about it:
1. Any data in FinetuneDB (evals, logs, datasets, etc.) can be exported or deleted.
2. Fine-tuned model weights for open-source models can be downloaded.
3. Using our inference stack is not a requirement. Many users are happy with only the dataset manager (which is 100% free).
4. We're exploring options to integrate external databases and storage providers with FinetuneDB, allowing datasets to be stored off our servers.