Hacker News
lawlessone on Dec 17, 2024 | on: New LLM optimization technique slashes memory cost...
Doesn't training require inference? So I guess it would help there too?
boringg on Dec 17, 2024
Yeah, but training is what requires the larger-memory data center infrastructure.
fzzzy on Dec 17, 2024
Training doesn't require inference. It uses back-propagation, a different algorithm.
bitvoid on Dec 17, 2024
Backpropagation happens after some number of inference passes. You need to run inference to compute a loss, which you then backpropagate from.
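A minimal sketch of this point (my own illustration, not from the thread): training a one-parameter linear model. The forward pass (inference) must run first to produce a prediction, since the loss and its gradient are both functions of that prediction.

```python
# Assumes a toy model y = w * x with squared-error loss; names are illustrative.

def forward(w, x):
    """Inference: predict y from input x with weight w."""
    return w * x

def train_step(w, x, y_true, lr=0.1):
    # 1. Forward pass (inference) produces a prediction.
    y_pred = forward(w, x)
    # 2. Loss compares the prediction to the target: L = (y_pred - y_true)^2.
    loss = (y_pred - y_true) ** 2
    # 3. Backprop: dL/dw = 2 * (y_pred - y_true) * x.
    #    Note the gradient depends on y_pred, so inference had to run first.
    grad = 2 * (y_pred - y_true) * x
    # 4. Gradient-descent update.
    return w - lr * grad, loss

w = 0.0
for _ in range(50):
    w, loss = train_step(w, x=1.0, y_true=3.0)
print(round(w, 3))  # converges toward the target weight 3.0
```

So a memory optimization that cheapens the forward pass would benefit training as well, since every training step contains one.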